25 Jan

Creating an image file from a local directory on Linux

Sometimes I wish to create a disk image file from a directory on my local computer. It’s an easy task on Linux and can be accomplished with only a couple of commands.

The main scenario is creating a .img file that can be written straight to an SD card (for example, Arch Linux for the Raspberry Pi comes as a tar.gz of the root filesystem rather than an image file) or used as a hard disk with QEMU.

The following steps (that I use for creating an Arch Linux SD card image file) will take you through creating an image file and writing the files you wish to it.

Create an 8GB image file:

dd if=/dev/zero of=newfile.img bs=1M count=8192

Partition the image file:

fdisk ./newfile.img

I usually create 2 partitions: a FAT partition of about 100M for the boot files, and the rest of the image as an ext4 partition for the root filesystem.
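
If you prefer a non-interactive approach, sfdisk can create the same layout from a short script. This is only a sketch of an equivalent to the layout described above (the partition type codes are assumptions: c for FAT, 83 for Linux):

sudo sfdisk ./newfile.img <<'EOF'
label: dos
,100M,c
,,83
EOF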

Attach the image file to a loop device. (I use the -f switch to find the next available loop device rather than explicitly specifying one; -P makes the kernel scan the partition table and create devices for the partitions.)

sudo losetup -Pf ./newfile.img

The image file should now be visible as a block device at /dev/loopX, where X is 0 if you don’t have any other loop devices in use.

Running the following should give output similar to the below (the sizes will reflect your image and partition layout):

$ lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
loop0         7:0    0     1G  0 loop 
├─loop0p1   259:6    0   100M  0 loop 
└─loop0p2   259:7    0   933M  0 loop

Format the partitions:

sudo mkfs.vfat /dev/loop0p1
sudo mkfs.ext4 /dev/loop0p2

Mount the partitions:

mkdir p1
mkdir p2
sudo mount /dev/loop0p1 p1
sudo mount /dev/loop0p2 p2

You can now copy anything across by moving your files into directories p1 or p2.

For example, since I usually create SD card images from the Arch Linux ARM root filesystem tar.gz, I do the following:

sudo bsdtar -xpf ArchLinuxARM-rpi-latest.tar.gz -C p2
sync
sudo mv p2/boot/* p1

Clean up:
At this point we just need to unmount the partitions and detach the loop device:

sudo umount p1 p2
sudo losetup -d /dev/loop0

You should now have a nicely prepared image file. You can also mount existing image files using losetup, as we did above.
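
If the end goal is an SD card, the finished image can then be written out with dd. A sketch (here /dev/sdX is a placeholder for your card reader’s device, so double-check it with lsblk before running this):

sudo dd if=newfile.img of=/dev/sdX bs=4M status=progress conv=fsync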

10 Nov

Resize the OS Disk in Resource Manager Azure Deployments using PowerShell

Recently I’ve had to increase the OS disk size of about 10 Ubuntu virtual machines running on Microsoft Azure. Looking to the future, I can foresee this happening quite a few more times, so I thought I would write a script that could do it for me. The script essentially checks that the new size is larger than the old size (an Azure requirement), shuts the virtual machine down (after prompting you for confirmation), resizes the disk to the new size and then starts the virtual machine back up.

There is a small quirk in the process that you will sometimes notice if you ever use the web UI to resize the OS disk: Azure sometimes doesn’t report the size of the disk correctly, so you don’t always know what the new size should be. To get around this I previously wrote a script (see here) which can get the disk size in an indirect way. Make sure you grab the Get-AzureRmOSDiskSize function from the linked post, as the script below relies on it.

An example usage is the following:

Set-AzureRmOSDiskSize -VirtualMachineName "APP01" -ResourceGroupName "Platform-Dev" -SizeInGB 40

Here is the code:

function Ask-ToContinue {

    param(
        [string]$message = $(throw "specify message"),
        [string]$prompt = $(throw "specify prompt")
    )

    $choices = New-Object Collections.ObjectModel.Collection[Management.Automation.Host.ChoiceDescription]
    $choices.Add((New-Object Management.Automation.Host.ChoiceDescription -ArgumentList '&Yes'))
    $choices.Add((New-Object Management.Automation.Host.ChoiceDescription -ArgumentList '&No'))

    $decision = $Host.UI.PromptForChoice($message, $prompt, $choices, 1)
    
    return $decision -eq 0

}

function Set-AzureRmOSDiskSize () {

    param(
        [string]$VirtualMachineName = $(throw "Specify virtual machine name"),
        [string]$ResourceGroupName = $(throw "Specify resource group name"),
        [int]$SizeInGB = $(throw "Specify Size in GB")
    )

    # Get-AzureRmOSDiskSize comes from the earlier post linked above
    $currentSize = Get-AzureRmOSDiskSize -VirtualMachineName $VirtualMachineName -ResourceGroupName $ResourceGroupName

    if ($currentSize -ne $null -and $SizeInGB -le $currentSize) {

        throw "Specified Disk Size is not larger than current size"

    }

    $VM = Get-AzureRmVM -ResourceGroupName $ResourceGroupName -Name $VirtualMachineName -Status

    if ($VM -eq $null) {

        throw "Virtual Machine not found"

    }

    $VMRunning = ($VM.Statuses | ? { $_.Code -eq "PowerState/running" } | Measure-Object | Select -ExpandProperty Count) -eq 1

    if ($VMRunning) {

        Write-Host "The VM is currently running." -ForegroundColor Magenta

        $stopTheVM = Ask-ToContinue -message "The VM must be stopped" -prompt "Would you like to stop the VM now?"

        if ($stopTheVM) {

            Write-Host -ForegroundColor Yellow "Stopping the VM"
            Stop-AzureRmVM -Name $VirtualMachineName -ResourceGroupName $ResourceGroupName -Force

        } else {

            Write-Host -ForegroundColor Cyan "Not stopping the VM."
            return

        }

    }

    $VM = Get-AzureRmVM -ResourceGroupName $ResourceGroupName -Name $VirtualMachineName
    $VM.StorageProfile.OsDisk.DiskSizeGB = $SizeInGB

    $result = Update-AzureRmVM -VM $VM -ResourceGroupName $ResourceGroupName

    if ($result.IsSuccessStatusCode) {
        
        Write-Host -ForegroundColor Green "Updated VM Successfully"

    }

    $startTheVM = Ask-ToContinue -message "The VM is currently stopped" -prompt "Would you like to start the VM now?"

    if ($startTheVM) {

        Write-Host -ForegroundColor Yellow "Starting the VM"
        Start-AzureRmVM -Name $VirtualMachineName -ResourceGroupName $ResourceGroupName

    }

}

22 Aug

Splitting and Joining Files with PowerShell

Sometimes it is useful to be able to split large files into smaller chunks. This can be because the file is bigger than the size limit of a particular communication or storage medium. There is plenty of software that will do just that: 7-Zip, WinZip and WinRAR, to name a few.

However, as I usually have my PowerShell profile synced to all my machines, it is an easy task to do in PowerShell. I wrote some PowerShell functions a while ago that split and join files. Here are a few examples of how they should be used; the code follows at the bottom:

Split-File -filename .\fileToSplit.dat -outprefix splitFilePrefix -splitSize 2M
Join-File -filename .\splitFilePrefix.001 -outfile CopyOfFileToSplit.dat

You can specify the split size using the suffixes K, M and G for kilobytes, megabytes and gigabytes respectively.

Note that the underlying .NET file APIs resolve relative paths against the process’s current working directory, not PowerShell’s current location, so to avoid confusion and strange behaviour use absolute paths. If you want to understand more about the difference then I recommend this blog post, which came up near the top when I googled for an insightful link: http://www.beefycode.com/post/The-Difference-between-your-Current-Directory-and-your-Current-Location.aspx
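
A quick way to see the difference for yourself (a sketch; C:\Temp is just an example path):

Set-Location C:\Temp
Get-Location                      # PowerShell's current location: C:\Temp
[Environment]::CurrentDirectory   # the process working directory, unchanged by Set-Location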

Here are the functions:

function Split-File()
{
	param
	(
		[string] $filename = $(throw "file required"),
		[string] $outprefix = $(throw "outprefix required"),
		[string] $splitSize = "50M",
		[switch] $Quiet
	)
	
	$match = [System.Text.RegularExpressions.Regex]::Match($splitSize, "^(\d+)([BKMGbkmg]?)$")
	[int64]$size = $match.Groups[1].Value
	$sizeUnit = $match.Groups[2].Value.ToUpper()
	$sizeUnitValue = 0
	switch($sizeUnit)
	{
		"K" { $sizeUnitValue = 1024 }
		"M" { $sizeUnitValue = 1048576 }
		"G" { $sizeUnitValue = 1073741824 }
		default { $sizeUnitValue = 1 }
	}
	
	$size = $sizeUnitValue * $size
	
	Write-Host ("Size Split is {0}" -f $size) -ForegroundColor Magenta
	
	$outFilePrefix = [System.IO.Path]::Combine((Get-Location).Path, $outprefix)
	
	$inFileName = [IO.Path]::Combine((Get-Location).Path,$filename)
	
	Write-Host ("Input File full path is {0}" -f $inFileName)
	
	if ([IO.File]::Exists($inFileName) -ne $true)
	{
		Write-Host ("{0} does not exist" -f $inFileName) -ForegroundColor Red
		return
	}
	
	$bufferSize = 1048576
	
	$ifs = [IO.File]::OpenRead($inFileName)
	$ofs = $null
	$buffer = New-Object -typeName byte[] -ArgumentList $bufferSize
	$outFileCounter = 0
	$bytesReadTotal = 0
	
	$bytesRead = 1 #Non zero starting number to ensure loop entry
	while ($bytesRead -gt 0)
	{
		$bytesToRead = [Math]::Min($size-$bytesReadTotal, $bufferSize)
		$bytesRead = $ifs.Read($buffer, 0, $bytesToRead)
		
		if ($bytesRead -ne 0)
		{		
			if ($ofs -eq $null)
			{
				$outFileCounter++
				$ofsName = ("{0}.{1:D3}" -f $outFilePrefix,$outFileCounter)
				$ofs = [IO.File]::OpenWrite($ofsName)
				if ($Quiet -ne $true)
				{
					Write-Host ("Created file {0}" -f $ofsName) -ForegroundColor Yellow
				}
			}
			
			$ofs.Write($buffer, 0, $bytesRead)
			$bytesReadTotal += $bytesRead
			
			if ($bytesReadTotal -ge $size)
			{
				$ofs.Close()
				$ofs.Dispose()
				$ofs = $null
				$bytesReadTotal = 0
			}
		}
	}
	
	if ($ofs -ne $null)
	{
		$ofs.Close()
		$ofs.Dispose()
	}
	
	Write-Host "Finished"
	
	$ifs.Close()
	$ifs.Dispose()
}

function Join-File()
{
	param
	(
		[string] $filename = $(throw "filename required"),
		[string] $outfile	= $(throw "out filename required")
	)
	
	$outfilename = [IO.Path]::Combine((Get-Location).Path, $outfile)
	$ofs = [IO.File]::OpenWrite($outfilename)
	
	$match = [System.text.RegularExpressions.Regex]::Match([IO.Path]::Combine((Get-Location).Path,$filename), "(.+)\.\d+$")
	if ($match.Success -ne $true)
	{
		Write-Host "Unrecognised filename format" -ForegroundColor Red
		$ofs.Close()
		$ofs.Dispose()
		return
	}
	$fileprefix = $match.Groups[1].Value
	$filecounter = 1
	$bufferSize = 1048576
	$buffer = New-Object -TypeName byte[] -ArgumentList $bufferSize
	
	while ([IO.File]::Exists(("{0}.{1:D3}" -f $fileprefix,$filecounter)))
	{
		$ifs = [IO.File]::OpenRead(("{0}.{1:D3}" -f $fileprefix,$filecounter))
		
		$bytesRead = $ifs.Read($buffer, 0, $bufferSize)
		while ($bytesRead -gt 0)
		{
			$ofs.Write($buffer,0,$bytesRead)
			$bytesRead = $ifs.Read($buffer, 0, $bufferSize)
		}		
		
		$ifs.Close()
		$ifs.Dispose()
	
		$filecounter++
	}
	
	$ofs.Close()
	$ofs.Dispose()

	Write-Host ("{0} created" -f $outfilename) -ForegroundColor Yellow
}
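
If you want to check that a join produced a byte-identical copy of the original, comparing hashes is a quick sanity check (Get-FileHash ships with PowerShell 4.0 and later; the file names here match the earlier example):

Get-FileHash .\fileToSplit.dat, .\CopyOfFileToSplit.dat -Algorithm SHA256
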
17 Apr

Automated Bitbucket repository backups without a dedicated user account

Recently I’ve been using Bitbucket as part of a new team I’ve been collaborating with. It’s a relatively small team of 5 members. Bitbucket hosts over 100 of the team’s repositories, which are backed up nightly by a cron job on a NAS server.

Bitbucket only allows up to 5 users per team before you have to start paying for its services. I replaced one of the existing team members, so when I joined, the old team member’s account was disassociated from the team on Bitbucket and my account was associated with it. This allowed the team to stay within its 5-user limit.

As it turned out, the backup script was using the old team member’s credentials to make the backups, and so the backups began to fail. It could easily have been fixed by changing the hard-coded credentials to another team member’s account. This approach, however, would just push the problem down the line and we would be hit again when other team members rolled on and off the collaboration.

Some of you may be thinking: why not just add a team SSH key and have the script use that? It’s true that we can use a team SSH key to perform a git clone of our repositories; however, we must know all of our repositories ahead of time. This would mean that every time we created a repository we would have to add it to our backup script. If we want to use the Bitbucket API to automatically find all the team repositories (as sketched below), then a team SSH key is not enough.
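
For reference, repository discovery means calling the Bitbucket 2.0 API, which needs proper credentials. A sketch of the listing call (the workspace name and access token are placeholders; the backup script does this for you):

curl -s -H "Authorization: Bearer $ACCESS_TOKEN" "https://api.bitbucket.org/2.0/repositories/myteam"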

Bitbucket also offers a team API key, which is basically just an account that can be used to act as the team. The team name (username) and API key (password) would have been enough to get the backups working and keep them working. However, there are a few problems I see with this:

  • If the API key is ever exposed, every application which uses it will need to be updated.
  • It grants far too many permissions to things that don’t need it. (A backup script should only have read-only access).
  • There is no accountability. If all the clients are using the same credentials then how do you know which one performed an action?

To get around these limitations I decided to use the OAuth option offered by Bitbucket. I wrote a script which can be installed by running:

npm install -g bitbucket-backup-oauth

Once installed you can run the backup script by including the following in your scripts or from the command line:

bitbucket-backup-oauth --owner principal [--backupFolder ./backup/] [--consumerKey client_id] [--consumerSecret client_secret]

The only mandatory parameter is the owner parameter. If the script cannot find locally stored client credentials (consumer key and secret) then you will be prompted for them. The consumer key and secret, and the associated setup, are detailed below.

Bitbucket Setup

You will need to set up an OAuth consumer on Bitbucket first. Go to Manage team and then, in the left-hand menu, there will be an OAuth option under Access Management.

Under the OAuth Consumers section click Add consumer. Fill in a name, select read access for repositories and set the callback URL to http://localhost/cb (it can be anything you want, as it won’t be used by the OAuth flow other than for the initial authorisation), then click Save.

Go back to the OAuth Consumers section and you will now have a consumer key (client id) and consumer secret (client password).

You will need to authorize the OAuth consumer to have access to your repositories. To do this, go to the following URL in your browser:

https://bitbucket.org/site/oauth2/authorize?client_id={client_id}&response_type=code

Replace {client_id} with your consumer key setup in the previous step.

If you are not already logged in, you will be asked to log in. You will then be presented with a screen asking you to authorize the consumer:

Click Grant access. You will be redirected to the callback URL localhost/cb. You will get a 404, but this does not matter: authorisation has been granted, and the consumer key and secret can now be used with the backup script.
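
With authorisation granted and the credentials stored after a first interactive run, the backup can run unattended. A sketch of a crontab entry for the nightly backup mentioned at the start (the schedule, backup folder and log path are placeholders):

0 2 * * * bitbucket-backup-oauth --owner principal --backupFolder /volume1/backups/bitbucket >> /var/log/bitbucket-backup.log 2>&1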

Benefits

Using the OAuth method addresses my concerns with the API key method above. In audit trails, the logs will show that it was the backup consumer that performed an action. We can revoke access at any time if we know the consumer key or secret has been compromised. And the credentials are only granted the permissions they need to do their job (read access to repositories).