10 Nov

Resize the OS Disk in Resource Manager Azure Deployments using PowerShell

Recently I’ve had to increase the OS disk size of about 10 Ubuntu virtual machines running on Microsoft Azure. Looking ahead, I can foresee having to do this quite a few more times, so I thought I would write a script that could do it for me. The script essentially shuts the virtual machine down (after prompting you for confirmation), checks that the new size is larger than the old size (an Azure requirement), resizes the disk and then starts the virtual machine back up.

There is a small quirk in the process that you may notice if you ever use the web UI to resize the OS disk: Azure sometimes doesn’t report the size of the disk correctly, so you don’t always know what the new size should be. To get around this I previously wrote a script (see here) which gets the disk size in an indirect way. Make sure you grab the Get-AzureRmOSDiskSize function from the linked post.
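You can see the quirk for yourself on an affected VM, where the OS disk size simply comes back empty. For example (using the same VM and resource group names as the usage example below):

$VM = Get-AzureRmVM -ResourceGroupName "Platform-Dev" -Name "APP01"
$VM.StorageProfile.OsDisk.DiskSizeGB   # sometimes $null, even though the disk clearly has a size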

An example usage is the following:

Set-AzureRmOSDiskSize -VirtualMachineName "APP01" -ResourceGroupName "Platform-Dev" -SizeInGB 40

Here is the code:

function Ask-ToContinue {

    param(
        [string]$message = $(throw "specify message"),
        [string]$prompt = $(throw "specify prompt")
    )

    $choices = New-Object Collections.ObjectModel.Collection[Management.Automation.Host.ChoiceDescription]
    $choices.Add((New-Object Management.Automation.Host.ChoiceDescription -ArgumentList '&Yes'))
    $choices.Add((New-Object Management.Automation.Host.ChoiceDescription -ArgumentList '&No'))

    # Default to No (index 1) so that just pressing Enter is the safe choice
    $decision = $Host.UI.PromptForChoice($message, $prompt, $choices, 1)
    
    return $decision -eq 0

}
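# Example standalone usage (mirrors how it is called further down):
#   if (Ask-ToContinue -message "The VM must be stopped" -prompt "Would you like to stop the VM now?") { ... }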

function Set-AzureRmOSDiskSize () {

    param(
        [string]$VirtualMachineName = $(throw "Specify virtual machine name"),
        [string]$ResourceGroupName = $(throw "Specify resource group name"),
        [int]$SizeInGB = $(throw "Specify Size in GB")
    )

    $currentSize = Get-AzureRmOSDiskSize -VirtualMachineName $VirtualMachineName -ResourceGroupName $ResourceGroupName

    if ($currentSize -ne $null -and $SizeInGB -le $currentSize) {

        throw "Specified Disk Size is not larger than current size"

    }

    $VM = Get-AzureRmVM -ResourceGroupName $ResourceGroupName -Name $VirtualMachineName -Status

    if ($VM -eq $null) {

        throw "Virtual Machine not found"

    }

    # The VM counts as running if exactly one status entry reports PowerState/running
    $VMRunning = ($VM.Statuses | ? { $_.Code -eq "PowerState/running" } | Measure-Object | Select -ExpandProperty Count) -eq 1

    if ($VMRunning) {

        Write-Host "The VM is currently running." -ForegroundColor Magenta

        $stopTheVM = Ask-ToContinue -message "The VM must be stopped" -prompt "Would you like to stop the VM now?"

        if ($stopTheVM) {

            Write-Host -ForegroundColor Yellow "Stopping the VM"
            Stop-AzureRmVM -Name $VirtualMachineName -ResourceGroupName $ResourceGroupName -Force

        } else {

            Write-Host -ForegroundColor Cyan "Not stopping the VM."
            return

        }

    }

    # Fetch the VM model again (without -Status) so it can be modified and pushed back
    $VM = Get-AzureRmVM -ResourceGroupName $ResourceGroupName -Name $VirtualMachineName
    $VM.StorageProfile.OsDisk.DiskSizeGB = $SizeInGB

    $result = Update-AzureRmVM -VM $VM -ResourceGroupName $ResourceGroupName

    if ($result.IsSuccessStatusCode) {
        
        Write-Host -ForegroundColor Green "Updated VM Successfully"

    }

    $startTheVM = Ask-ToContinue -message "The VM is currently stopped" -prompt "Would you like to start the VM now?"

    if ($startTheVM) {

        Write-Host -ForegroundColor Yellow "Starting the VM"
        Start-AzureRmVM -Name $VirtualMachineName -ResourceGroupName $ResourceGroupName

    }

}
17 Oct

Retrieve the size of the OS Disk in Resource Manager Azure Deployments using PowerShell

Recently I noticed that a script I was using wasn’t working correctly because it was failing to get the size of an Azure VM’s OS disk.

Usually I would use something similar to the following:

$VM = Get-AzureRmVM -ResourceGroupName $ResourceGroupName -Name $VirtualMachineName
$size = $VM.StorageProfile.OsDisk.DiskSizeGB

However, $VM.StorageProfile.OsDisk.DiskSizeGB was null. I decided that I would instead write a function which goes and looks at the underlying VHD blob and finds the size that way. The code and an example usage are below:

function Get-AzureRmVhdSize {

    param(
        [Uri] $uri = $(throw "Please enter a URI")
    )
	
    # Reconstruct the blob endpoint (e.g. https://mystorage.blob.core.windows.net/)
    # and find the storage account that owns it
    $blobEndpoint = $uri.Scheme + "://" + $uri.Host + "/"
    $sa = Get-AzureRmStorageAccount | ? { $_.PrimaryEndpoints.Blob -eq $blobEndpoint }

    if ($sa -eq $null -or $sa.Length -ne 1) {

        Throw "Unable to locate storage account"

    }

    # AbsolutePath looks like /container/blob.vhd: index 0 is the container, index 1 the blob
    $containerAndBlob = $uri.AbsolutePath.Split("/", [StringSplitOptions]::RemoveEmptyEntries)

    $blob = Get-AzureStorageBlob -Blob $containerAndBlob[1] -Container $containerAndBlob[0] -Context $sa.Context

    # A fixed-format VHD blob is the logical disk size plus a 512-byte footer;
    # the [int] cast rounds that remainder away (1GB is PowerShell shorthand for 1073741824)
    $sizeInBytes = $blob.Length
    $sizeInGB = [int]($sizeInBytes/1GB)

    return $sizeInGB

}

function Get-AzureRmOSDiskSize {

    param(
        [string]$VirtualMachineName = $(throw "Specify virtual machine name"),
        [string]$ResourceGroupName = $(throw "Specify resource group name")
    )

    $VM = Get-AzureRmVM -ResourceGroupName $ResourceGroupName -Name $VirtualMachineName

    if ($VM -eq $null) {

        throw "Virtual Machine Not Found"

    }

    return Get-AzureRmVhdSize -uri $VM.StorageProfile.OsDisk.Vhd.Uri

}

Get-AzureRmOSDiskSize -VirtualMachineName testvm -ResourceGroupName testrg
20 Sep

Testing TCP connections with PowerShell

I’ve been in the situation where I have needed to test whether I could make a TCP connection from one Windows host to another to verify that a network team had indeed opened firewall ports. It seems like a trivial thing to do: just connect from host A to host B on the specified port. But what programs can we use to do this? It’s overkill to install a full piece of server and client software to test this, let alone read the documentation needed to configure the correct port. It gets worse if things don’t work, as you still don’t know whether it’s the firewall or your configuration!

Linux users could just install Netcat on both hosts and check this in a few seconds. Windows users can still install networking utilities similar to Netcat, but I find them overly complicated considering that 99% of the time I just want to know whether an intermediate firewall is blocking a connection.

PowerShell is so useful and gives you the full power of the .NET framework. This means we can create these utilities ourselves natively without installing any third party libraries.

I’ve created two functions, Listen-Tcp and Connect-Tcp, with code listings at the bottom of the post. The following is an example use of the utilities:

Listen-Tcp -port <Int32>
Connect-Tcp -hostname <string> -port <Int32>

Running the corresponding functions on two hosts, hosta and hostb, will give you the following output:

hosta > Listen-Tcp -port 3000
Listening on port 3000
Stopped Listening
hostb > Connect-Tcp -hostname "hosta" -port 3000
Data sent to and received from target successfully

For convenience I have added these functions to my PowerShell profile so that they are available on all servers I log into within the domain.
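If you want to do the same, a minimal sketch looks like the following (TcpUtils.ps1 is a hypothetical file name; adjust the path to wherever you keep the functions):

# Dot-source the TCP test functions from a file stored alongside the profile
. (Join-Path (Split-Path $PROFILE) 'TcpUtils.ps1')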

Below is the code listing

function Listen-Tcp()
{
	param(
		[Int32] $port
	)
	
	# Listen on all local interfaces on the given port
	$server = New-Object -TypeName System.Net.Sockets.TcpListener -ArgumentList @([System.Net.IPAddress]::Any, $port)
	$server.Start()
	
	Write-Host ("Listening on port {0}" -f $port)
	# Block until a client connects
	$clientSocket = $server.AcceptSocket()
	
	# Read the 4-byte "EHLO" greeting and echo it straight back
	$buffer = New-Object -TypeName byte[] -ArgumentList 4
	$clientSocket.Receive($buffer) | Out-Null
	
	$clientSocket.Send($buffer) | Out-Null
	$clientSocket.Close()
	
	$server.Stop()
	
	Write-Host "Stopped Listening"
}
function Connect-Tcp()
{
	param(
		[string]$hostname,
		[Int32]$port
	)
	
	try
	{
		$client = New-Object -TypeName System.Net.Sockets.TcpClient -ArgumentList $hostname,$port
		$stream = $client.GetStream()
		
		# Send a 4-byte greeting and expect the listener to echo it back
		$buffer = [System.Text.Encoding]::ASCII.GetBytes("EHLO")
		$stream.Write($buffer, 0, $buffer.Length)
		
		$receiveBuffer = New-Object -TypeName byte[] -ArgumentList $buffer.Length
		$stream.Read($receiveBuffer, 0, $receiveBuffer.Length) | Out-Null
		
		$receivedText = [System.Text.Encoding]::ASCII.GetString($receiveBuffer)
		
		$stream.Close()
		$client.Close()
		
		if ($receivedText -eq "EHLO") {
			Write-Host "Data sent to and received from target successfully"
		} else {
			Write-Host "Data received was not as expected"
		}
	} catch [Exception]
	{
		Write-Host "Could not connect to target machine"
	}
}
22 Aug

Splitting and Joining Files with PowerShell

Sometimes it is useful to be able to split large files into smaller chunks, for example when a file is bigger than the size limit of a particular communication or storage medium. There is plenty of software that will do just that; to name a few: 7-Zip, WinZip and WinRAR.

However, as I usually have my PowerShell profile synced to all my machines, it is an easy task to do in PowerShell. I wrote some PowerShell functions a while ago that split and join files. Here are a few examples of how they are used, and the code follows at the bottom:

Split-File -filename .\fileToSplit.dat -outprefix splitFilePrefix -splitSize 2M
Join-File -filename .\splitFilePrefix.001 -outfile CopyOfFileToSplit.dat

You can specify the split size using the suffixes K, M and G for kilobytes, megabytes and gigabytes respectively.
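For example, the 2M split size above works out as 2 × 1,048,576 bytes, which matches PowerShell’s own size constants:

2 * 1048576 -eq 2MB   # True: the suffixes are binary units, so 2M means 2,097,152 bytes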

Note that the file locations are relative to the process’s current working directory and not PowerShell’s current location. To avoid confusion and strange behaviour, use absolute paths. If you want to understand more about the difference then I recommend this blog post, which came out near the top when googling for an insightful link: http://www.beefycode.com/post/The-Difference-between-your-Current-Directory-and-your-Current-Location.aspx
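A quick way to see, and avoid, the mismatch (assuming your current location is a regular filesystem path):

[Environment]::CurrentDirectory                        # the process's current working directory
Get-Location                                           # PowerShell's current location - they can differ
[Environment]::CurrentDirectory = (Get-Location).Path  # bring them back in line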

Here are the functions below:

function Split-File()
{
	param
	(
		[string] $filename = $(throw "file required"),
		[string] $outprefix = $(throw "outprefix required"),
		[string] $splitSize = "50M",
		[switch] $Quiet
	)
	
	# Parse sizes like "500K", "50M" or "2G" (no suffix means bytes)
	$match = [System.Text.RegularExpressions.Regex]::Match($splitSize, "^(\d+)([BKMGbkmg]?)$")
	if ($match.Success -ne $true)
	{
		throw "Unrecognised split size format"
	}
	[int64]$size = $match.Groups[1].Value
	$sizeUnit = $match.Groups[2].Value.ToUpper()
	$sizeUnitValue = 0
	switch($sizeUnit)
	{
		"K" { $sizeUnitValue = 1024 }
		"M" { $sizeUnitValue = 1048576 }
		"G" { $sizeUnitValue = 1073741824 }
		default { $sizeUnitValue = 1 }
	}
	
	$size = $sizeUnitValue * $size
	
	Write-Host ("Size Split is {0}" -f $size) -ForegroundColor Magenta
	
	$outFilePrefix = [System.IO.Path]::Combine((Get-Location).Path, $outprefix)
	
	$inFileName = [IO.Path]::Combine((Get-Location).Path,$filename)
	
	Write-Host ("Input File full path is {0}" -f $inFileName)
	
	if ([IO.File]::Exists($inFileName) -ne $true)
	{
		Write-Host ("{0} does not exist" -f $inFileName) -ForegroundColor Red
		return
	}
	
	$bufferSize = 1048576
	
	$ifs = [IO.File]::OpenRead($inFileName)
	$ofs = $null
	$buffer = New-Object -typeName byte[] -ArgumentList $bufferSize
	$outFileCounter = 0
	$bytesReadTotal = 0
	
	$bytesRead = 1 #Non zero starting number to ensure loop entry
	while ($bytesRead -gt 0)
	{
		# Read up to the buffer size, but never past the current chunk boundary
		$bytesToRead = [Math]::Min($size-$bytesReadTotal, $bufferSize)
		$bytesRead = $ifs.Read($buffer, 0, $bytesToRead)
		
		if ($bytesRead -ne 0)
		{		
			# Lazily open the next numbered part (prefix.001, prefix.002, ...) on first write
			if ($ofs -eq $null)
			{
				$outFileCounter++
				$ofsName = ("{0}.{1:D3}" -f $outFilePrefix,$outFileCounter)
				$ofs = [IO.File]::OpenWrite($ofsName)
				if ($Quiet -ne $true)
				{
					Write-Host ("Created file {0}" -f $ofsName) -ForegroundColor Yellow
				}
			}
			
			$ofs.Write($buffer, 0, $bytesRead)
			$bytesReadTotal += $bytesRead
			
			if ($bytesReadTotal -ge $size)
			{
				$ofs.Close()
				$ofs.Dispose()
				$ofs = $null
				$bytesReadTotal = 0
			}
		}
	}
	
	if ($ofs -ne $null)
	{
		$ofs.Close()
		$ofs.Dispose()
	}
	
	Write-Host "Finished"
	
	$ifs.Close()
	$ifs.Dispose()
}

function Join-File()
{
	param
	(
		[string] $filename = $(throw "filename required"),
		[string] $outfile	= $(throw "out filename required")
	)
	
	$outfilename = [IO.Path]::Combine((Get-Location).Path, $outfile)
	
	# The input must be one of the numbered parts, e.g. splitFilePrefix.001
	$match = [System.Text.RegularExpressions.Regex]::Match([IO.Path]::Combine((Get-Location).Path,$filename), "(.+)\.\d+$")
	if ($match.Success -ne $true)
	{
		Write-Host "Unrecognised filename format" -ForegroundColor Red
		return
	}
	
	$ofs = [IO.File]::OpenWrite($outfilename)
	$fileprefix = $match.Groups[1].Value
	$filecounter = 1
	$bufferSize = 1048576
	$buffer = New-Object -TypeName byte[] -ArgumentList $bufferSize
	
	# Append each numbered part in order until the sequence runs out
	while ([IO.File]::Exists(("{0}.{1:D3}" -f $fileprefix,$filecounter)))
	{
		$ifs = [IO.File]::OpenRead(("{0}.{1:D3}" -f $fileprefix,$filecounter))
		
		$bytesRead = $ifs.Read($buffer, 0, $bufferSize)
		while ($bytesRead -gt 0)
		{
			$ofs.Write($buffer,0,$bytesRead)
			$bytesRead = $ifs.Read($buffer, 0, $bufferSize)
		}		
		
		$ifs.Close()
		$ifs.Dispose()
	
		$filecounter++
	}
	
	$ofs.Close()
	$ofs.Dispose()

	Write-Host ("{0} created" -f $outfilename) -ForegroundColor Yellow
}
17 Apr

Automated Bitbucket repository backups without a dedicated user account

Recently I’ve been using Bitbucket as part of a new team I’ve been collaborating with. It’s a relatively small team of 5 members. Bitbucket hosts over 100 of the team’s repositories, which are backed up nightly by a cron job on a NAS server.

Bitbucket only allows up to 5 users per team before you have to start paying for its services. I replaced one of the existing team members, so when I joined, the old team member’s account was disassociated from the team on Bitbucket and my account was associated with it. This allowed the team to stay within its 5-user limit.

As it turned out, the backup script was using the old team member’s credentials to make the backups, and so the backups began to fail. It could easily have been fixed by changing the hard-coded credentials to another team member’s account. That approach, however, would just push the problem down the line; we would be hit again whenever team members rolled on and off the collaboration.

Some of you may be thinking: why not just add a team SSH key and have the script use that? It’s true that we can use a team SSH key to perform a git clone of our repositories, but we must then know all of our repositories ahead of time. Every time we created a repository we would have to add it to our backup script. If we want to use the Bitbucket API to automatically find all the team repositories, then a team SSH key is not enough.

Bitbucket also offers a team API key, which is basically just an account that can be used to act as the team. The team name (username) and API key (password) would have been enough to get the backups working and keep them working. There are a few problems I see with this:

  • If the API key is ever exposed, every application which uses it will need to be updated.
  • It grants far too many permissions to things that don’t need it. (A backup script should only have read-only access).
  • There is no accountability. If all the clients are using the same credentials then how do you know which one performed an action?

To get around these limitations I decided to use the OAuth option offered by Bitbucket. I wrote a script which can be installed by running:

npm install -g bitbucket-backup-oauth

Once installed you can run the backup script by including the following in your scripts or from the command line:

bitbucket-backup-oauth --owner principal [--backupFolder ./backup/] [--consumerKey client_id] [--consumerSecret client_secret]

The only mandatory parameter is owner. If the script cannot find locally stored client credentials (consumer key and secret) then you will be prompted for them. The consumer key and secret, and the associated setup, are detailed below.

Bitbucket Setup

You will need to set up an OAuth consumer on Bitbucket first. Go to manage team; in the left-hand side menu there will be an OAuth option under Access Management.

[Screenshot: the OAuth option under Access Management]

Under the OAuth Consumers section click Add consumer. Fill in a name, select read access for repositories, and set the callback URL to http://localhost/cb (it can be anything you want, as it isn’t used beyond the initial authorisation in the OAuth flow we use), then finally click save.

[Screenshot: the Add consumer form]

Go back to the OAuth Consumers section and you will now have a consumer key (client id) and consumer secret (client password).

[Screenshot: the consumer key and secret listed under OAuth Consumers]

You will need to authorize the OAuth consumer to have access to your repositories. To do this you will need to use your browser to go to the following URL:

https://bitbucket.org/site/oauth2/authorize?client_id={client_id}&response_type=code

Replace {client_id} with your consumer key setup in the previous step.

If you are not already logged in you will be asked to log in. You will then be presented with a screen asking you to authorize the consumer:

[Screenshot: the consumer authorization prompt]

Click grant access. You will be redirected to the callback URL localhost/cb. You will get a 404, but this does not matter: authorisation has been granted and the consumer key and secret can now be used with the backup script.
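Under the hood, the credentials can now be swapped for a bearer token. The following is a rough sketch of that exchange in PowerShell (this is not the bitbucket-backup-oauth implementation itself, just the OAuth2 client credentials grant it relies on; the key and secret values are placeholders, and the owner is the one from the earlier usage example):

# Exchange the consumer key/secret for an OAuth2 bearer token
$consumerKey = "your_consumer_key"        # placeholder
$consumerSecret = "your_consumer_secret"  # placeholder

$pair  = "{0}:{1}" -f $consumerKey, $consumerSecret
$basic = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes($pair))

$token = Invoke-RestMethod -Method Post `
    -Uri "https://bitbucket.org/site/oauth2/access_token" `
    -Headers @{ Authorization = "Basic $basic" } `
    -Body @{ grant_type = "client_credentials" }

# The token can then be used to enumerate the team's repositories via the API
Invoke-RestMethod -Uri "https://api.bitbucket.org/2.0/repositories/principal" `
    -Headers @{ Authorization = "Bearer $($token.access_token)" }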

Benefits

Using the OAuth method addresses my concerns with the API key method above. The audit logs tell us when it is the backup consumer acting, so there is accountability. We can revoke access at any time if we know the consumer key or secret has been compromised. And the credentials are only granted the permissions they need to do their job (read access to repositories).