20 Sep

Testing TCP connections with PowerShell

I’ve been in the situation where I needed to test whether I could make a TCP connection from one Windows host to another to verify that a network team had indeed opened firewall ports. It seems a trivial thing to do: just connect from host A to host B on the specified port. But what programs can we use to do this? Installing a full piece of server and client software just to test a port is overkill, let alone reading the documentation needed to configure the correct port. And it gets worse if things don’t work, as you still don’t know whether it’s the firewall or your configuration!

Linux users could just install Netcat on both hosts and check this in a few seconds. Windows users can also install networking utilities similar to Netcat, but I find them overly complicated considering that 99% of the time I just want to know whether an intermediate firewall is blocking a connection.

PowerShell is hugely useful and gives you the full power of the .NET Framework. This means we can create these utilities ourselves natively, without installing any third-party libraries.

I’ve created two functions, Listen-Tcp and Connect-Tcp, whose code listings are at the bottom of the post. The following shows example usage of the utilities:

Listen-Tcp -port <Int32>
Connect-Tcp -hostname <string> -port <Int32>

Running the corresponding functions on two hosts, hosta and hostb, will give you the following outputs:

hosta > Listen-Tcp -port 3000
Listening on port 3000
Stopped Listening
hostb > Connect-Tcp -hostname "hosta" -port 3000
Data sent to and received from target successfully

For convenience I have added these functions to my PowerShell profile so that they are available on all servers I log into within the domain.
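
If you want to do the same, here is a minimal sketch that appends the functions to your profile, assuming they are saved in a local TcpUtils.ps1 (the filename is illustrative):

# Create the profile file if it doesn't already exist
if (-not (Test-Path $profile)) {
	New-Item -ItemType File -Path $profile -Force | Out-Null
}

# Append the functions so they are loaded in every new session
Get-Content .\TcpUtils.ps1 | Add-Content -Path $profile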

Below is the code listing:

function Listen-Tcp()
{
	param(
		[Int32] $port
	)
	
	$server = New-Object -TypeName System.Net.Sockets.TcpListener -ArgumentList @([System.Net.IPAddress]::Any, $port)
	$server.Start()
	
	Write-Host ("Listening on port {0}" -f $port)
	$clientSocket = $server.AcceptSocket()
	
	$buffer = New-Object -TypeName byte[] -ArgumentList 4
	$clientSocket.Receive($buffer) | Out-Null
	
	$clientSocket.Send($buffer) | Out-Null
	$clientSocket.Close()
	
	$server.Stop()
	
	Write-Host "Stopped Listening"
}
function Connect-Tcp()
{
	param(
		[string]$hostname,
		[Int32]$port
	)
	
	try
	{
		$client = New-Object -TypeName System.Net.Sockets.TcpClient -ArgumentList $hostname,$port
		$stream = $client.GetStream()
		
		$buffer = [System.Text.Encoding]::ASCII.GetBytes("EHLO")
		$stream.Write($buffer, 0, $buffer.Length)
		
		$receiveBuffer = New-Object -TypeName byte[] -ArgumentList $buffer.Length
		$stream.Read($receiveBuffer, 0, $receiveBuffer.Length) | Out-Null
		
		$receivedText = [System.Text.Encoding]::ASCII.GetString($receiveBuffer)
		
		$stream.Close()
		$client.Close()
		
		if ($receivedText -eq "EHLO") {
			Write-Host "Data sent to and received from target successfully"
		} else {
			Write-Host "Data receieved was not as expected"
		}
	} catch [Exception]
	{
		Write-Host "Could not connect to target machine"
	}
}
22 Aug

Splitting and Joining Files with PowerShell

Sometimes it is useful to be able to split large files into smaller chunks, for example when the file is bigger than the size limit of a particular communication or storage medium. There is plenty of software that will do just that; to name a few: 7-Zip, WinZip and WinRAR.

However, as I usually have my PowerShell profile synced to all my machines, this is an easy task to do in PowerShell directly. I wrote some PowerShell functions a while ago that split and join files. Here are a few examples of how they are used; the code follows at the bottom:

Split-File -filename .\fileToSplit.dat -outprefix splitFilePrefix -splitSize 2M
Join-File -filename .\splitFilePrefix.001 -outfile CopyOfFileToSplit.dat

You can specify the split size using the suffixes K, M and G for kilobytes, megabytes and gigabytes respectively.

Note that the file locations are relative to the process’s current working directory and not PowerShell’s current location. To avoid confusion and strange behaviour, use absolute paths. If you want to understand more about the difference then I recommend this blog post, which came out near the top when I googled for an insightful link: http://www.beefycode.com/post/The-Difference-between-your-Current-Directory-and-your-Current-Location.aspx
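
A quick way to see the difference for yourself:

Set-Location C:\Windows
[Environment]::CurrentDirectory  # the process working directory: unchanged
(Get-Location).Path              # PowerShell's current location: now C:\Windows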

Here are the functions below:

function Split-File()
{
	param
	(
		[string] $filename = $(throw "file required"),
		[string] $outprefix = $(throw "outprefix required"),
		[string] $splitSize = "50M",
		[switch] $Quiet
	)
	
	$match = [System.Text.RegularExpressions.Regex]::Match($splitSize, "^(\d+)([BKMGbkmg]?)$")
	[int64]$size = $match.Groups[1].Value
	$sizeUnit = $match.Groups[2].Value.ToUpper()
	$sizeUnitValue = 0
	switch($sizeUnit)
	{
		"K" { $sizeUnitValue = 1024 }
		"M" { $sizeUnitValue = 1048576 }
		"G" { $sizeUnitValue = 1073741824 }
		default { $sizeUnitValue = 1 }
	}
	
	$size = $sizeUnitValue * $size
	
	Write-Host ("Size Split is {0}" -f $size) -ForegroundColor Magenta
	
	$outFilePrefix = [System.IO.Path]::Combine((Get-Location).Path, $outprefix)
	
	$inFileName = [IO.Path]::Combine((Get-Location).Path,$filename)
	
	Write-Host ("Input File full path is {0}" -f $inFileName)
	
	if ([IO.File]::Exists($inFileName) -ne $true)
	{
		Write-Host ("{0} does not exist" -f $inFileName) -ForegroundColor Red
		return
	}
	
	$bufferSize = 1048576
	
	$ifs = [IO.File]::OpenRead($inFileName)
	$ofs = $null
	$buffer = New-Object -typeName byte[] -ArgumentList $bufferSize
	$outFileCounter = 0
	$bytesReadTotal = 0
	
	$bytesRead = 1 #Non zero starting number to ensure loop entry
	while ($bytesRead -gt 0)
	{
		$bytesToRead = [Math]::Min($size-$bytesReadTotal, $bufferSize)
		$bytesRead = $ifs.Read($buffer, 0, $bytesToRead)
		
		if ($bytesRead -ne 0)
		{		
			if ($ofs -eq $null)
			{
				$outFileCounter++
				$ofsName = ("{0}.{1:D3}" -f $outFilePrefix,$outFileCounter)
				$ofs = [IO.File]::OpenWrite($ofsName)
				if ($Quiet -ne $true)
				{
					Write-Host ("Created file {0}" -f $ofsName) -ForegroundColor Yellow
				}
			}
			
			$ofs.Write($buffer, 0, $bytesRead)
			$bytesReadTotal += $bytesRead
			
			if ($bytesReadTotal -ge $size)
			{
				$ofs.Close()
				$ofs.Dispose()
				$ofs = $null
				$bytesReadTotal = 0
			}
		}
	}
	
	if ($ofs -ne $null)
	{
		$ofs.Close()
		$ofs.Dispose()
	}
	
	Write-Host "Finished"
	
	$ifs.Close()
	$ifs.Dispose()
}

function Join-File()
{
	param
	(
		[string] $filename = $(throw "filename required"),
		[string] $outfile	= $(throw "out filename required")
	)
	
	$outfilename = [IO.Path]::Combine((Get-Location).Path, $outfile)
	$ofs = [IO.File]::OpenWrite($outfilename)
	
	$match = [System.text.RegularExpressions.Regex]::Match([IO.Path]::Combine((Get-Location).Path,$filename), "(.+)\.\d+$")
	if ($match.Success -ne $true)
	{
		Write-Host "Unrecognised filename format" -FroegroundColor Red
	}
	$fileprefix = $match.Groups[1].Value
	$filecounter = 1
	$bufferSize = 1048576
	$buffer = New-Object -TypeName byte[] -ArgumentList $bufferSize
	
	while ([IO.File]::Exists(("{0}.{1:D3}" -f $fileprefix,$filecounter)))
	{
		$ifs = [IO.File]::OpenRead(("{0}.{1:D3}" -f $fileprefix,$filecounter))
		
		$bytesRead = $ifs.Read($buffer, 0, $bufferSize)
		while ($bytesRead -gt 0)
		{
			$ofs.Write($buffer,0,$bytesRead)
			$bytesRead = $ifs.Read($buffer, 0, $bufferSize)
		}		
		
		$ifs.Close()
		$ifs.Dispose()
	
		$filecounter++
	}
	
	$ofs.Close()
	$ofs.Dispose()

	Write-Host ("{0} created" -f $outfilename) -ForegroundColor Yellow
}
17 Aug

Installing Cyanogen 12.1 on the Samsung Galaxy S2 (i9100)

If you have an old Samsung Galaxy S2 lying about and you wish to give it a new lease of life then you might consider flashing it with Cyanogen Mod.

I did this recently but didn’t follow the instructions provided by Cyanogen because I couldn’t get the Heimdall suite working properly on a Windows machine.

Instead I decided to use Odin, and so I’ve set out my steps below to get Cyanogen 12.1 running on my Galaxy S2. Note that I take no responsibility for what happens to your device or data; you should make appropriate backups beforehand, as the instructions below are destructive to the data on the phone.

First things first, make sure you have installed Android Studio and the SDK Manager so that you have adb and the USB drivers installed. If you don’t want to install the entire SDK and Android Studio then you can download the platform tools directly from Google: https://dl-ssl.google.com/android/repository/platform-tools_r23-windows.zip. The version linked has been verified to work with the Samsung S2 on a Windows 10 machine.
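
With the phone connected and USB debugging enabled, a quick sanity check that adb can see the device is:

adb devices

If the phone appears in the device list then the drivers and platform tools are working.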

Next we need to get hold of Odin; I’ve used Odin 3.10 to perform the installation. Getting a trustworthy copy of Odin is a high priority for me, and I do trust the guys at Chainfire, who package a version of Odin with their auto-root packages. Head over to https://autoroot.chainfire.eu/ and download a package with Odin 3.10 included. I chose the package for the S4 (i9505) (https://download.chainfire.eu/316/CF-Root/CF-Auto-Root/CF-Auto-Root-jflte-jfltexx-gti9505.zip).

Extract the autoroot zip file and delete the .tar.md5 file that is included; what remains is Odin.

Next we need to flash a custom recovery.

Instructions found at https://wiki.cyanogenmod.org/w/Install_CM_for_i9100#Installing_a_custom_recovery suggest this recovery, and I had no problems with this image.

Once it is downloaded, open Odin as an administrator and put your Galaxy S2 into download mode (shut it down, then hold the volume down + home + power buttons until a disclaimer appears, which you then need to accept). Connect the Galaxy S2 to the computer via USB cable. In Odin you will see a message saying the device has been added; if not, you will need to check your USB drivers (please Google for this problem).

In Odin, click on the AP button and then navigate to the recovery image you just downloaded and select it. Click start and wait for it to flash the recovery.

When Odin completes it will reset the phone, and you will need to be ready to put the phone into recovery mode as soon as it restarts. (To get into recovery mode hold the volume up + home + power buttons until you see the Samsung logo, at which point you should release the power button and wait for it to enter recovery mode. If it doesn’t go into recovery mode and instead continues to boot, just pull the battery and retry entering recovery mode.)

If all has been successful then you should now be in a custom recovery. Select the wipe/factory reset option, then select the Install Zip option followed by Install Zip from Sideload.

At this point your phone should be ready to accept the Cyanogen package from your computer via adb. From the Cyanogen site (https://download.cyanogenmod.org/?device=i9100) download the Cyanogen 12.1 package (https://download.cyanogenmod.org/get/jenkins/135113/cm-12.1-20151116-SNAPSHOT-YOG7DAO1JN-i9100.zip).

Open up a command prompt where adb can be executed and then type the following:

adb sideload <path/to/cm-12.1-20151116-SNAPSHOT-YOG7DAO1JN-i9100.zip>

Once the installation is complete it will return to the parent menu and you can then choose to reboot the system.

If everything went to plan then your phone should boot into Cyanogen 12.1.

You may consider installing the Google Play Store by following this link so that you can install apps from the Play Store.

29 Jul

Containerising your WordPress blog with Docker

In this post I will talk you through creating a copy of your live WordPress blog as a Docker container. I wanted to do this so that I could test updates and themes against actual live content before making them publicly available, and see how any plug-ins under test interact with that content.

The first thing you need to do is make sure you have Docker installed or have access to a Docker host.

Pull down the WordPress, MySQL and phpMyAdmin prebuilt Docker images from the Docker Hub:

docker pull wordpress
docker pull mysql
docker pull phpmyadmin/phpmyadmin

Just to note, at the time of writing this post the tags for each were:

Image                   Tag                        Dockerfile
wordpress               4.5                        https://github.com/docker-library/wordpress/blob/6afa0720da89f31d6c61fd38bb0d6de6e9a14a49/apache/Dockerfile
mysql                   5.7.13                     https://github.com/docker-library/mysql/blob/f7a67d7634a68d319988ad6f99729bfeaa84ceb2/5.7/Dockerfile
phpmyadmin/phpmyadmin   4.6.3-1 (from GitHub tag)  https://github.com/phpmyadmin/docker/blob/4.6.3-1/Dockerfile

We now need to download a snapshot of your current live WordPress blog. This is very hosting-company specific, but in the most generic terms possible: you need to download all files under the www root directory on your hosting company’s servers, and export a copy of your WordPress MySQL database. In the rest of this post I will refer to the copy of the www root directory as the WordPress files and to the export of the MySQL database as the database.

At this point we need to run up a MySQL and a phpMyAdmin container. Execute the following on the Docker host:

docker run --name wp-mysql -e MYSQL_ROOT_PASSWORD=my-secret-pwd -d mysql
docker run --name phpmyadmin -d --link wp-mysql:db -p 8081:80 phpmyadmin/phpmyadmin

A MySQL server instance will now be running on the Docker host, and it can be administered from the phpMyAdmin instance running on port 8081 of the Docker host.

Use the phpMyAdmin instance, with the MySQL root password set above, to create a new user, making sure you select the option to create a database with the same name. After this, import your database files using the import option of phpMyAdmin.
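
Alternatively, if you prefer the command line to phpMyAdmin's import screen, you can pipe the export straight into the MySQL container (the database name and dump filename here are illustrative):

docker exec -i wp-mysql mysql -uroot -pmy-secret-pwd database_name < database_dump.sql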

Once you have imported the database, modify the kvht_options table so that the records with the following names are updated accordingly:

name         value
siteurl      <docker-host-hostname-or-IP-address>:8080
home         <docker-host-hostname-or-IP-address>:8080
upload_path  /var/www/html/wp-content/uploads

Note: your table name may have a different prefix to mine, or no prefix at all.

At this point, if you have any media embedded using an absolute URL to your actual live WordPress site, you will need to update the individual posts to make the URLs server-relative. This is not covered in this post, as I always use server-root-relative paths for this type of media. A quick trick would be an appropriate regex find and replace in the exported SQL file before the import, as sketched below.
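
As a sketch of that trick in PowerShell (the domain is a placeholder for your live site's URL and the dump filename is illustrative):

(Get-Content .\database_dump.sql -Raw) -replace 'https?://www\.myliveblog\.com', '' | Set-Content .\database_dump.sql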

Now we need to run an instance of the WordPress image, but using our WordPress files:

docker run --name wp-blog --link wp-mysql:mysql -p 8080:80 -e MYSQL_ENV_MYSQL_USER=database_user -e MYSQL_ENV_MYSQL_PASSWORD=database_user_password -e MYSQL_ENV_MYSQL_DATABASE=database_name -v /path/to/wordpress/files:/var/www/html -d wordpress

At this point you should be able to access your WordPress site on port 8080 of the Docker host.

For those who use boot2docker (even through docker-machine), your WordPress file directory may not be writeable by the WordPress Docker container. To fix this I built the WordPress image directly from the associated Dockerfile, with an extra command that modifies the www-data user inside the image so that its UID matches a user on the source file system who has write access to the WordPress files.

15 Jun

Detecting a loop in a linked list

I was recently asked about Floyd’s cycle-finding algorithm, see here for details. In particular, the person wanted to know what speeds the two pointers can move at for the algorithm to work. In Floyd’s algorithm the pointers move 1 step and 2 steps at a time respectively.

It’s an interesting question which touches on an element of number theory called modular arithmetic and the study of congruence relations.

In Floyd’s algorithm the “hare” and “tortoise” both start at the beginning of the linked list at time t = 0. At each step the “tortoise” moves 1 place and the “hare” moves 2 places. Given an arbitrary linked list with a cycle, it is clear the “tortoise” and “hare” won’t meet until they are both in the cycle. For this reason we don’t need to know how many steps it took for both of them to reach the cycle; we only care from the earliest step at which both are in the cycle, and we will relabel that step t = 0. We will call a and b the positions that the “tortoise” and “hare” are in at this new t = 0.

We don’t know how long the cycle is, but we will call its length n. We label the first element in the cycle 0, the second 1, the third 2, …, and the n-th will be labelled n - 1.

If we are at the position labelled n-1 and we move one place then we are at the position labelled 0.

[figure: an example linked list with a cycle, annotated with the values of t, n, a and b]

To give a few concrete examples we will pretend to be working with a cycle of length 12 as this gives us the familiar intuition that we have from using our watches.

[figure: a clock face with positions labelled 1 to 12]

However, we will relabel the 12 position to be 0.

[figure: the same clock face with 12 relabelled as 0]

Now consider the “tortoise” and “hare” starting at positions 0 and 1 respectively. If the “tortoise” and “hare” move 1 place and 2 places respectively at a time, will they ever meet?

A quick bit of working with pen and paper will yield the result that after 11 steps the “tortoise” will be at position 11, and the “hare” will have gone round the watch once and reached position 11 again.

It is true that with the “tortoise” moving 1 place and the “hare” moving 2 places at a time, they will always meet eventually regardless of their start positions (a, b). The reasons for this will be described later.

What we would like to know is whether other amounts that the “tortoise” and “hare” can move at each step yield the same result of them meeting eventually.

Consider the same start positions as above, but this time the “tortoise” moves 3 places and the “hare” moves 5 places at a time. Will they meet then?

The positions at each step for the first 15 steps are shown below.

Step  Tortoise  Hare
   0         0     1
   1         3     6
   2         6    11
   3         9     4
   4         0     9
   5         3     2
   6         6     7
   7         9     0
   8         0     5
   9         3    10
  10         6     3
  11         9     8
  12         0     1
  13         3     6
  14         6    11
  15         9     4

You may have noticed that at step 12 the pattern from step 0 starts to repeat. The “tortoise” and “hare” didn’t meet in the first 12 steps and so will never meet. Clearly steps of 3 and 5 don’t work for Floyd’s algorithm with these start positions. However, if the “tortoise” started at position 0 and the “hare” at position 2, it is easy to check that they would meet; so the start positions matter as well!
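
If you would rather not use pen and paper, a short PowerShell loop reproduces the table above:

$n = 12        # cycle length
$tortoise = 0  # tortoise start position
$hare = 1      # hare start position

Write-Host "Step  Tortoise  Hare"
for ($step = 0; $step -le 15; $step++)
{
	Write-Host ("{0,4} {1,9} {2,5}" -f $step, $tortoise, $hare)
	$tortoise = ($tortoise + 3) % $n  # tortoise moves 3 places
	$hare = ($hare + 5) % $n          # hare moves 5 places
}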

Before we write out the problem more formally, let’s first formalise this “clock” arithmetic we have been using implicitly.

We say that a ≡ b (mod n) if (b - a) is a multiple of n.

The concrete examples we will give will use our familiar clock.

13 ≡ 1 (mod 12) — that is to say that 1pm and 1am look the same on a 12 hour clock. They both use the 1 position.
25 ≡ 1 (mod 12) — that is to say if we start at 1pm and add 24 hours then it is still 1pm but just a day later.

In both examples 13 – 1 = 12 and 25 – 1 = 24 are both multiples of 12.

Given the start positions a and b, the places moved at each step p and q (for the “tortoise” and “hare” respectively) and a cycle of length n, we are looking for a solution t to the following equation:

a + pt ≡ b + qt (mod n)

This can be rewritten as:

(p - q)t ≡ b - a (mod n)

This is a standard equation in number theory, and it has a solution if and only if gcd(p - q, n) divides (b - a).

The gcd is the greatest common divisor. More information can be found at gcd.
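
Applying this condition to the earlier example confirms what the table showed. With n = 12, step sizes p = 3 and q = 5, and start positions a = 0 and b = 1, we have gcd(p - q, n) = gcd(-2, 12) = 2, and 2 does not divide (b - a) = 1, so there is no solution and the “tortoise” and “hare” never meet. Starting the “hare” at position 2 instead gives (b - a) = 2, which 2 does divide, so a meeting is guaranteed.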

Given that the length of the cycle n and the start positions are arbitrary, the only way to guarantee that the above equation has a solution is to ensure that gcd(p - q, n) = 1, and the only way to guarantee this for all n is to make |p - q| = 1. In other words, the number of places that the “tortoise” and “hare” move at a time must differ by exactly 1.

This explains why the choice of 1 and 2 works for all linked lists with a cycle; however values such as 4 and 5 will also work, as will 99 and 100, or even 121 and 120!

02 Jun

Creating a private Ethereum blockchain and cryptocurrency

I’ve recently been playing with a few blockchain implementations and looking at the technology’s uses in the financial services sector.

One of the blockchain implementations I’ve been looking at is Ethereum.

This blog post is an all-in-one guide to getting up and running with Ethereum on a private network. It will cover the following:

  1. Starting a blockchain with Ethereum
  2. Mining with an Ethereum node
  3. Connecting the 2 nodes and sending a test transaction
  4. Creating a cryptocurrency called Innovions
  5. Performing a transaction with Innovions

Starting a blockchain with Ethereum

Make sure you have geth installed locally. There is a Docker container for geth on the Docker Hub which can be used for testing purposes.

The first thing we need to do is create the initialisation settings for the private blockchain. Create a file called genesis.block.json and populate its contents as follows:

{
	"nonce": "0xfaceb00cfaceb00c",
	"timestamp": "0x0",
	"parentHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
	"extraData": "0x0",
	"gasLimit": "0x8000000",
	"difficulty": "0x400",
	"mixhash": "0x0000000000000000000000000000000000000000000000000000000000000000",
	"coinbase": "0x3333333333333333333333333333333333333333",
	"alloc": {
	}
}

Launch geth with the following command to initialise the blockchain on this node:

geth --datadir /path/to/store/nodedata init /path/to/genesis.block.json

If successful, the output will be similar to:

I0531 11:13:39.555651 ethdb/database.go:82] Alloted 16MB cache and 16 file handles to node1/chaindata
I0531 11:13:39.560438 cmd/geth/main.go:353] successfully wrote genesis block and/or chain rule set: ba4fe4055a968c1b05a1254289164e7665cfef89782dcc7dcaec2e5e4edc83a6

To start Ethereum in console mode run the following:

geth --datadir nodedata --nodiscover --networkid 222 console

Note: just make sure the networkid is unique among any other Ethereum networks that you may be able to contact. The main network uses either 0 or 1 as its network id.

Mining with an Ethereum node

Before we can begin mining we need to create an account. At the geth console type the following:
> personal.newAccount("insecure_password")
"0x160b182bc3fed971000a05a5dc9eff821ad63f21"

The string “0x160b182bc3fed971000a05a5dc9eff821ad63f21” is the address of our account on the network, and “insecure_password” is a password which protects the private key of this address on this node.

Ethereum works on a base currency called ether, and you need ether to send transactions. First we will check our ether balance.
Type the following into the console:

function checkAllBalances() { 
    var i =0; 
    eth.accounts.forEach( function(e){
        console.log("  eth.accounts["+i+"]: " +  e + " \tbalance: " + web3.fromWei(eth.getBalance(e), "ether") + " ether"); 
        i++; 
    })
};

and then type the following at the console:
> checkAllBalances()
eth.accounts[0]: 0x160b182bc3fed971000a05a5dc9eff821ad63f21 balance: 0 ether

You will see that we have no ether. Let’s mine to gather some.

miner.start();

Mining should begin after the DAG is generated. We should gather ether quickly enough that after about 5 minutes we can check our balance again and see that we have acquired some.

> checkAllBalances()
eth.accounts[0]: 0x160b182bc3fed971000a05a5dc9eff821ad63f21 balance: 5 ether

This confirms that we have successfully mined. Next we will connect a second node.

Connecting the 2 nodes and sending a test transaction

Repeat the initialisation step from the first node, using the same genesis.block.json file, on a secondary node. Ideally use a second computer; if you want to run both nodes on the same computer, use a different port when running geth by specifying --port "65432" on the command line, as shown after the commands below.

geth --datadir nodedata init genesis.block.json

Start Ethereum in console mode by running the following:

geth --datadir nodedata --nodiscover --networkid 222 console
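
If you are running both nodes on the same machine rather than on a second computer, give the second node its own data directory as well as its own port (the directory name here is illustrative):

geth --datadir nodedata2 init genesis.block.json
geth --datadir nodedata2 --nodiscover --networkid 222 --port "65432" console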

Now type the following to get the address of the second node:
> admin.nodeInfo.enode
"enode://33b87bfabdeace1b40faf7219870f716ded69e04be409278e9673d80e475bac3e[email protected][::]:30303?discport=0"

Replace the [::] in the enode address with the IP address of the second node. For example, if the second node has IP address 172.17.0.3 then replace [::] with [172.17.0.3].

On the first node type the following:

admin.addPeer("enode://33b87bfabdeace1b40faf7219870f716ded69e04be409278e9673d80e475bac3e[email protected][172.17.0.3]:30303?discport=0");

If successful, running admin.peers.length on both nodes should return 1.

Create a new account on the secondary node and start mining:

> personal.newAccount("insecure_password")
"0x8b6e1d3dac361256c371b308c8e3be42fe4b9dc2"
> miner.start();

Run the following code to have access to a function to check ether balances:

function checkAllBalances() { 
    var i =0; 
    eth.accounts.forEach( function(e){
        console.log("  eth.accounts["+i+"]: " +  e + " \tbalance: " + web3.fromWei(eth.getBalance(e), "ether") + " ether"); 
        i++; 
    })
};

After the DAG has been generated and after about 5 minutes of mining you should have a non-zero ether balance:

> checkAllBalances()
eth.accounts[0]: 0x8b6e1d3dac361256c371b308c8e3be42fe4b9dc2 balance: 5 ether

Let’s send some ether.

On the first node type the following:

> personal.unlockAccount(eth.accounts[0], "insecure_password")
true
> eth.sendTransaction({ from: eth.accounts[0], to: "0x8b6e1d3dac361256c371b308c8e3be42fe4b9dc2", value: web3.toWei(1, "ether")});
I0531 12:06:30.232783 eth/api.go:1193] Tx(0x67cd98d4599602d2b99a7a63193aa23c6170f9177203f5a841d736ce62ffa0d8) to: 0x8b6e1d3dac361256c371b308c8e3be42fe4b9dc2
"0x67cd98d4599602d2b99a7a63193aa23c6170f9177203f5a841d736ce62ffa0d8"

Please note that you will need to use personal.unlockAccount regularly, as your account locks itself quite frequently. You can pass a third parameter to unlockAccount, the duration (in seconds) to keep the account unlocked, however this has not been tested.
Wait for a block to be mined to confirm the transaction and check the balances on each node:

Node 1:

> checkAllBalances()
eth.accounts[0]: 0x160b182bc3fed971000a05a5dc9eff821ad63f21 balance: 17.999116405924692 ether

Node 2:

> checkAllBalances()
eth.accounts[0]: 0x8b6e1d3dac361256c371b308c8e3be42fe4b9dc2 balance: 7.000883594075308 ether

We can now move on to creating our own crypto currency.

Creating a cryptocurrency called Innovions

I’ve used an online contract compiler, https://ethereum.github.io/browser-solidity/, to compile the Ethereum contracts.

My Innovions contract (currency) is as follows:

contract Innovions {

    address public controller;

    /* This creates an array with all balances */
    mapping (address => uint256) public balanceOf;

    function Innovions() {
        controller = msg.sender;
    }

    function transferController(address _to) {
        
        if (msg.sender != controller) {
            throw;
        }
        
        controller = _to;
    }
    
    function transfer(address _to, uint256 _value) {
        
        /* Check if sender has balance and for overflows */
        if (balanceOf[msg.sender] < _value || balanceOf[_to] + _value < balanceOf[_to])
        {
            throw;
        }

        /* Add and subtract new balances */
        balanceOf[msg.sender] -= _value;
        balanceOf[_to] += _value;
        
    }
    
    function issue(uint256 _value)
    {
        if (msg.sender != controller) {
            throw;
        }
        
        balanceOf[msg.sender] += _value;
    }
}

Copy the contract into the online compiler, and under the Web3 Deploy section there will be code similar to the following, which is used to create the contract:

var innovionsContract = web3.eth.contract([
    {"constant":true,"inputs":[{"name":"","type":"address"}],"name":"balanceOf","outputs":[{"name":"","type":"uint256"}],"type":"function"},
    {"constant":false,"inputs":[{"name":"_to","type":"address"},{"name":"_value","type":"uint256"}],"name":"transfer","outputs":[],"type":"function"},
    {"constant":false,"inputs":[{"name":"_value","type":"uint256"}],"name":"issue","outputs":[],"type":"function"},
    {"constant":false,"inputs":[{"name":"_to","type":"address"}],"name":"transferController","outputs":[],"type":"function"},
    {"constant":true,"inputs":[],"name":"controller","outputs":[{"name":"","type":"address"}],"type":"function"},
    {"inputs":[],"type":"constructor"}]);

var innovions = innovionsContract.new(
    {
        from: web3.eth.accounts[0],
        data: '60606040525b33600060006101000a81548173ffffffffffffffffffffffffffffffffffffffff021916908302179055505b6103a68061003f6000396000f360606040526000357c01000000000000000000000000000000000000000000000000000000009004806370a0823114610065578063a9059cbb14610091578063cc872b66146100b2578063e8ea054b146100ca578063f77c4791146100e257610063565b005b61007b600480803590602001909190505061011b565b6040518082815260200191505060405180910390f35b6100b06004808035906020019091908035906020019091905050610136565b005b6100c86004808035906020019091905050610259565b005b6100e060048080359060200190919050506102f5565b005b6100ef6004805050610380565b604051808273ffffffffffffffffffffffffffffffffffffffff16815260200191505060405180910390f35b60016000506020528060005260406000206000915090505481565b80600160005060003373ffffffffffffffffffffffffffffffffffffffff1681526020019081526020016000206000505410806101d25750600160005060008373ffffffffffffffffffffffffffffffffffffffff1681526020019081526020016000206000505481600160005060008573ffffffffffffffffffffffffffffffffffffffff1681526020019081526020016000206000505401105b156101dc57610002565b80600160005060003373ffffffffffffffffffffffffffffffffffffffff16815260200190815260200160002060008282825054039250508190555080600160005060008473ffffffffffffffffffffffffffffffffffffffff1681526020019081526020016000206000828282505401925050819055505b5050565b600060009054906101000a900473ffffffffffffffffffffffffffffffffffffffff1673ffffffffffffffffffffffffffffffffffffffff163373ffffffffffffffffffffffffffffffffffffffff161415156102b557610002565b80600160005060003373ffffffffffffffffffffffffffffffffffffffff1681526020019081526020016000206000828282505401925050819055505b50565b600060009054906101000a900473ffffffffffffffffffffffffffffffffffffffff1673ffffffffffffffffffffffffffffffffffffffff163373ffffffffffffffffffffffffffffffffffffffff1614151561035157610002565b80600060006101000a81548173ffffffffffffffffffffffffffffffffffffffff021916908302179055505b50565b600060009054906101000a900473ffffffffffffffffffffffffffffffffffffffff168156',
        gas: 3000000
    }, function(e, contract) {
        console.log(e, contract);

        if (typeof contract.address != 'undefined') {
            console.log('Contract mined! address: ' + contract.address + ' transactionHash: ' + contract.transactionHash);
        }
    });

Run the above code on the first node. This will make the first node the “royal mint” for the new currency.

Once the create-contract transaction has been mined, we will receive the contract’s address on the console:

Contract mined! address: 0x0636d186816c37bb1292725fba6f110c85c7c8d6 transactionHash: 0x33c73a0a314bc0ddf3a129452bc3457ade067ec7427ce9fc95bddd3446530b3f

So that we can interact with the contract on the second node, type the following on the second node:

var innovions = web3.eth.contract([
    {"constant":true,"inputs":[{"name":"","type":"address"}],"name":"balanceOf","outputs":[{"name":"","type":"uint256"}],"type":"function"},
    {"constant":false,"inputs":[{"name":"_to","type":"address"},{"name":"_value","type":"uint256"}],"name":"transfer","outputs":[],"type":"function"},
    {"constant":false,"inputs":[{"name":"_value","type":"uint256"}],"name":"issue","outputs":[],"type":"function"},
    {"constant":false,"inputs":[{"name":"_to","type":"address"}],"name":"transferController","outputs":[],"type":"function"},
    {"constant":true,"inputs":[],"name":"controller","outputs":[{"name":"","type":"address"}],"type":"function"},
    {"inputs":[],"type":"constructor"}]).at("0x0636d186816c37bb1292725fba6f110c85c7c8d6");

Make sure to replace “0x0636d186816c37bb1292725fba6f110c85c7c8d6” with your contract address.

Performing a transaction with Innovions

For the first transaction we will issue 100 Innovions to the royal mint. On the first node execute the following:

> innovions.issue(100, { from: eth.accounts[0] });

Once the issue transaction has been mined we can check the balance of all parties from either node:

> innovions.balanceOf("0x160b182bc3fed971000a05a5dc9eff821ad63f21")
100
> innovions.balanceOf("0x8b6e1d3dac361256c371b308c8e3be42fe4b9dc2")
0

Next from the first node, let’s transfer 25 Innovions to the second node:

> innovions.transfer("0x8b6e1d3dac361256c371b308c8e3be42fe4b9dc2", 25, { from: eth.accounts[0] });

Once the transaction has been mined we can check the balances on either node:

> innovions.balanceOf("0x160b182bc3fed971000a05a5dc9eff821ad63f21")
75
> innovions.balanceOf("0x8b6e1d3dac361256c371b308c8e3be42fe4b9dc2")
25

Summary

From start to finish we have created a private Ethereum blockchain and deployed a smart contract (the cryptocurrency in this instance) on that blockchain. We have demonstrated mining and transactions. Hopefully this serves as a rapid introduction to a standard use case of Ethereum and gives you the confidence to take your next steps.

17 Apr

Automated Bitbucket repository backups without a dedicated user account

Recently I’ve been using Bitbucket as part of a new team I’ve been collaborating with. It’s a relatively small team of 5 members. Bitbucket hosts over 100 of the team’s repositories, and so they are backed up nightly by a cron job on a NAS server.

Bitbucket only allows up to 5 users per team before you have to start paying for its services. I replaced one of the existing team members, so when I joined, the old team member’s account was disassociated from the team on Bitbucket and my account was associated with it. This allowed the team to stay within its 5 user limit.

As it turned out, the backup script was using the old team member’s credentials to make the backups, and so the backups began to fail. It could easily have been fixed by changing the hard-coded credentials to another team member’s account. This approach, however, would just push the problem down the line, and we would be hit again when other team members rolled on and off the collaboration.

Some of you may be thinking: why not just add a team SSH key and have the script use that? It’s correct that we can use a team SSH key to perform a git clone of our repositories, however we must then know all of our repositories ahead of time. This would mean that every time we created a repository we would have to add it to our backup script. If we want to use the Bitbucket API to automatically find all the team repositories then a team SSH key is not enough.

Bitbucket also offers a team API key, which is basically just an account that can be used to act as the team. The team name (username) and API key (password) would have been enough to get the backups working and to keep them working. There are a few problems I see with this:

  • If the API key is ever exposed, every application which uses it will need to be updated.
  • It grants far too many permissions to things that don’t need it. (A backup script should only have read-only access).
  • There is no accountability. If all the clients are using the same credentials then how do you know which one performed an action?

To get around these limitations I decided to use the OAuth option offered by Bitbucket. I wrote a script which can be installed by running:

npm install -g bitbucket-backup-oauth

Once installed you can run the backup script by including the following in your scripts or from the command line:

bitbucket-backup-oauth --owner principal [--backupFolder ./backup/] [--consumerKey client_id] [--consumerSecret client_secret]

The only mandatory parameter is owner. If the script cannot find locally stored client credentials (consumer key and secret) then you will be prompted for them. The consumer key and secret, and the associated setup, are detailed below.

Bitbucket Setup

You will need to set up an OAuth consumer on Bitbucket first. Go to manage team, and on the left-hand side menu there will be an OAuth option under Access Management.

Under the OAuth Consumers section click Add consumer. Fill in a name, select read access for repositories, and set the callback URL to http://localhost/cb (it can be anything you want, as it won’t be used with the OAuth flow other than for the initial authorisation), then finally click Save.

Go back to the OAuth Consumers section and you will now have a consumer key (client id) and consumer secret (client password).

You will need to authorize the OAuth consumer to have access to your repositories. To do this, use your browser to go to the following URL:

https://bitbucket.org/site/oauth2/authorize?client_id={client_id}&response_type=code

Replace {client_id} with the consumer key set up in the previous step.

If you are not already logged in you will be asked to login. You will be presented with a screen asking for you to authorize the consumer:

Click grant access. You will be redirected to the callback URL localhost/cb. You will get a 404, but this does not matter: authorisation has been granted, and the consumer key and secret can now be used with the backup script.

Benefits

Using the OAuth method addresses my concerns with the API key method above. In audit trails, the logs will show that it is the backup consumer acting. We can revoke access at any time if we know the consumer key or secret has been compromised. And the credentials are only granted the permissions they need to do their job (read access to repositories).

20 Mar

Boot ArchLinux from USB on the Raspberry Pi

If you have ever wanted to store your root file system on a USB stick rather than an SD card then the following will describe how to do so.

You can’t get rid of the SD card completely, as the Raspberry Pi is designed to boot from an SD card; however you only need to store the “boot” files on the SD card, as this is what the Raspberry Pi will attempt to load when powered on.

The usual instructions for installing Arch on the Raspberry Pi are to format the SD card with two partitions using a DOS (MBR) partitioning format. The first partition is recommended to be 100 MB in size and is formatted using the FAT file system. The second partition takes up the rest of the space on the SD card and is formatted using the standard Linux ext4 file system.

Once the partitions are created and the file systems formatted, we download the latest Arch Linux package for the Raspberry Pi (I’m using the Raspberry Pi 2 package here: http://archlinuxarm.org/os/ArchLinuxARM-rpi-2-latest.tar.gz). Extract this archive to the main ext4 partition on the SD card, then move the “boot” files from /mnt/p2/boot/ to /mnt/p1, assuming that the 100 MB FAT partition is mounted at /mnt/p1 and the ext4 partition is mounted at /mnt/p2.

This works because the “boot” files on the first partition point to the second partition containing the root file system.

All we need to do is install the root file system on a USB drive and change the “boot” files on the SD card to point to our root file system partition on the USB drive.

The following assumes that on the Linux system used to prepare the USB and SD card that /dev/sdb is the USB drive and /dev/sdc is the SD card.

  1. Start fdisk to partition the SD card:
    fdisk /dev/sdc
  2. At the fdisk prompt, delete old partitions and create a new one:
    1. Type o. This will create a new MBR partition table at the beginning of the SD card.
    2. Type n to create a new partition.
    3. Make it a primary partition by typing p.
    4. Type 1 for the first partition on the SD card and press ENTER twice to accept the default first and last sector.
    5. Type t, then c to set the first partition to type W95 FAT32 (LBA).
    6. Save the changes and exit by typing w.
  3. Start fdisk to partition the USB drive:
    fdisk /dev/sdb
  4. At the fdisk prompt, delete old partitions and create a new one:
    1. Type o. This will create a new MBR partition table at the beginning of the USB drive.
    2. Type n to create a new partition.
    3. Make it a primary partition by typing p.
    4. Type 1 for the first partition on the USB drive and press ENTER twice to accept the default first and last sector.
    5. Save the changes and exit by typing w.
  5. Create and mount the FAT filesystem:
    mkfs.vfat /dev/sdc1
    mkdir boot
    mount /dev/sdc1 boot
  6. Create and mount the ext4 filesystem:
    mkfs.ext4 /dev/sdb1
    mkdir root
    mount /dev/sdb1 root
  7. Download and extract the root filesystem (as root, not via sudo):
    wget http://archlinuxarm.org/os/ArchLinuxARM-rpi-2-latest.tar.gz
    bsdtar -xpf ArchLinuxARM-rpi-2-latest.tar.gz -C root
    sync
  8. Move boot files to the first partition:
    mv root/boot/* boot
  9. Modify the file boot/cmdline.txt so that the root= kernel option specifies the first partition on the USB device. It should look similar to this:
    root=/dev/sda1 rw rootwait console=ttyAMA0,115200 console=tty1 selinux=0 plymouth.enable=0 smsc95xx.turbo_mode=N dwc_otg.lpm_enable=0 kgdboc=ttyAMA0,115200 elevator=noop

    If no other USB drives are to be plugged into the Raspberry Pi then the USB device is likely to get assigned /dev/sda.

  10. Unmount the two partitions:
    umount boot root
  11. Insert the SD card and USB drive into the Raspberry Pi and switch it on

The problem with the above is that you cannot guarantee that the USB drive will be assigned /dev/sda, especially when using more than one USB drive with the Pi.

To make this more reliable, when partitioning the USB drive above you should use gdisk to create a GPT-based partition table. This way the partition can be referenced using a UUID, by modifying cmdline.txt so that the root kernel option is in the form root=PARTUUID=4673c3fe-9bab-476c-88c5-65e2c842f72a.
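
For example, once the GPT partition has been created with gdisk, you can read its PARTUUID with blkid (assuming the USB drive is still /dev/sdb):

blkid /dev/sdb1

The PARTUUID value in the output is what goes into the root= kernel option in cmdline.txt.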

07 Feb

Signing your PowerShell Scripts

At some point or another we have all been in the situation, usually on a new machine, where we get the familiar PSSecurityException:

.\xxx.ps1 : File C:\xxx.ps1 cannot be loaded. The file C:\xxx.ps1 is not digitally signed. You cannot run this
script on the current system. For more information about running scripts and setting execution policy, see
about_Execution_Policies at http://go.microsoft.com/fwlink/?LinkID=135170.
At line:1 char:1
+ .\xxx.ps1
+ ~~~~~~~~~~
    + CategoryInfo          : SecurityError: (:) [], PSSecurityException
    + FullyQualifiedErrorId : UnauthorizedAccess

Of course, we know this is down to PowerShell’s execution policy. The behaviour of PowerShell when it executes a script can be configured to one of the following:

  • Restricted – This is the most secure but the least versatile. It does not allow any scripts to be run.
  • AllSigned – This allows scripts that have been digitally signed by a certificate that the computer trusts to run.
  • Unrestricted – This is the least secure. It allows all scripts regardless of origin to run.
  • RemoteSigned – This is a step up from Unrestricted. Scripts that have been downloaded are not allowed to run unless they are digitally signed by a certificate that the computer trusts.

The default policy depends on a number of things, including group policy and which operating system you are using. A lot of people get around this error by setting the execution policy to Unrestricted:

Set-ExecutionPolicy Unrestricted

I, however, do not like this as it leaves your system open to abuse.

RemoteSigned should be the minimum you ever set this policy to. Even with RemoteSigned you may find you still get the above error when your script is located on a network drive. In this case, rather than changing the execution policy to Unrestricted, you should change your Internet options so that the server hosting the file share is trusted. To do this, go to Control Panel and then Internet Options to open the Internet Options dialog:

Under Internet Options select the Security tab and then the Local intranet zone. Once selected, click Sites and then Advanced. Add the server to the list in the form file://<servername> (untick Require server verification). Once this has been updated, restart any PowerShell windows and try executing your script on the network share again; you should find it runs.

However, I think the best approach is to sign your scripts. The argument against this is that it’s a “complicated” process involving certificates, which people seem to find hard anyway.

It is true that you will need to get a code signing certificate, and if your IT support has set things up correctly you should be able to get one in about 30 seconds (see Create a Code Signing Certificate in 30 Seconds). Sometimes your IT support will not have configured your infrastructure for maximum usability and security, and so you may need to expend a bit more effort; however, this is an infrequent process with big advantages for security.

If you have a code signing certificate then signing your scripts is incredibly easy. Just substitute the certificate thumbprint with your own:

$cert = Get-ChildItem cert:\CurrentUser\My\FACE9812CAFE7634BABE54561A2B3C4D5E6FDEAD
Set-AuthenticodeSignature -Certificate $cert -FilePath X:\path_to_script.ps1

This will add a block like the following to the end of your script:

# SIG # Begin signature block
# MIIInAYJKoZIhvcNAQcCoIIIjTCCCIkCAQExCzAJBgUrDgMCGgUAMGkGCisGAQQB
......................
# NNDblcNY9HDwe+tXuDRtu3YLoBWBq4xasQTml46HHtEd1z6L+qTDi1gwbv/dFmIB
# SIG # End signature block

This is the digital signature, and it is what PowerShell verifies when trying to run a script. See how easy that was?
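
You can also confirm that a signature is valid at any time with Get-AuthenticodeSignature; the Status property should read Valid:

Get-AuthenticodeSignature -FilePath X:\path_to_script.ps1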

Top tip

My home directory is hosted on a network server, and so my PowerShell profile file is hosted on a network share. This means that by default, without signing my PowerShell profile, every time I load PowerShell I get a message:

. : File \\domain.com\FILES\USERS\my.username\My Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1
cannot be loaded. The file \\domain.com\FILES\USERS\my.username\My
Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1 is not digitally signed. You cannot run this script on
the current system. For more information about running scripts and setting execution policy, see
about_Execution_Policies at http://go.microsoft.com/fwlink/?LinkID=135170.
At line:1 char:3
+ . '\\domain.com\FILES\USERS\my.username\My Documents\WindowsPowerShell\Micr ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : SecurityError: (:) [], PSSecurityException
    + FullyQualifiedErrorId : UnauthorizedAccess

It was easy enough to sign my profile file:

$cert = Get-ChildItem cert:\CurrentUser\My\FACE9812CAFE7634BABE54561A2B3C4D5E6FDEAD
Set-AuthenticodeSignature -Certificate $cert -FilePath $profile

The problem is that I frequently add, remove or edit things in my profile, which usually leads to the following error message:

. : File \\domain.com\FILES\USERS\my.username\My Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1
cannot be loaded. The contents of file \\domain.com\FILES\USERS\my.username\My
Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1 might have been changed by an unauthorized user or
process, because the hash of the file does not match the hash stored in the digital signature. The script cannot run
on the specified system. For more information, run Get-Help about_Signing..
At line:1 char:3
+ . '\\domain.com\FILES\USERS\my.username\My Documents\WindowsPowerShell\Micr ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : SecurityError: (:) [], PSSecurityException
    + FullyQualifiedErrorId : UnauthorizedAccess

This is due to the file being changed, so that the digital signature is no longer valid. It’s easy enough to correct by signing the file again:

$cert = Get-ChildItem cert:\CurrentUser\My\FACE9812CAFE7634BABE54561A2B3C4D5E6FDEAD
Set-AuthenticodeSignature -Certificate $cert -FilePath $profile

but I find this tedious, as looking up my thumbprint and then typing the path is more effort than I’m willing to spend. I usually have a PowerShell window open even when I am editing my profile, and so to save myself some time I have defined the following function in my profile:

function Update-ProfileSignature()
{
	$signingCertificateThumbprint = (Get-AuthenticodeSignature $profile).SignerCertificate.Thumbprint	
	$codeSigningCertificate = Get-ChildItem -Recurse Cert:\ | ? { $_.Thumbprint -eq $signingCertificateThumbprint -and $_.HasPrivateKey } | Select -First 1
	Set-AuthenticodeSignature -FilePath $profile -Certificate $codeSigningCertificate | Out-Null

	Write-Host "Updated signature on profile" -ForegroundColor Green
}

Every time I edit my profile, in an already open PowerShell window I execute the function Update-ProfileSignature, which re-signs my profile file.

It’s important that a PowerShell window is already open (and therefore has the Update-ProfileSignature function loaded and defined), because once you edit your profile you cannot load it without error into any new PowerShell window until it is re-signed; you therefore could not call Update-ProfileSignature from a fresh window, which would make it useless.

06 Feb

Create a Code Signing Certificate in 30 Seconds

Have you ever needed a code signing certificate to sign a PowerShell script or other piece of software within your organisation?

One of the great things about Enterprise PKI within an Active Directory environment is the ability to generate certificates for all manner of different purposes. You might want a certificate for S/MIME email, an SSL certificate for an internal web server, or just a code signing certificate for internal software or scripts. All these tasks can take as little as 30 seconds.

I am going to take you through getting a code signing certificate that can be used to sign your software or scripts within your organisation.

Open up the Microsoft Management Console (mmc.exe).

Go to File > Add/Remove Snap-in… and add the Certificates snap-in.

Open up the Certificates – Current User node, then right-click the Personal node and go to All Tasks > Request New Certificate…

You will see the certificate enrollment wizard appear. Click Next.

Select the Active Directory Enrollment Policy and click Next. (If the AD enrollment policy doesn’t appear then your computer isn’t in a domain environment where IT support have set up an Enterprise PKI environment, and unfortunately you will need to use a different method.)

Select the Code Signing enrollment policy and click Enroll. (If the code signing enrollment policy isn’t available then your system support have decided not to allow you to request code signing certificates using the Enterprise PKI, and you will have to find a different method.)

You should hopefully get a success message.

A new certificate should have appeared in the Certificates snap-in of the Microsoft Management Console under the Personal node.

You can now use this certificate to sign software or scripts.