Friday, 16 March 2012

Looking at Livedrive transfer speeds on 100Mb broadband.

At home we have a number of computers and I have been improving the quality of our backups.  We have just had a VirginMedia 100Mb connection installed; the main attraction is the 10Mb upload.  I also have a Zen connection.  The Zen line is only 8Mb ADSL, although it has half the latency of the Virgin line.  (Virgin pings at about 15 msec and Zen at around 8 msec.)  I have been using Livedrive to carry out our own backup to the cloud.  I already had a local backup, but I wanted the security of an offsite backup as well.  I have been interested in the difference between backing up small files and backing up larger files.  I have replaced our router, which was a D-Link, with a pfSense Alix 2D13 and things have been much more stable and reliable.  One of the nice features bundled with the pfSense router is traffic logging.  So here is the effect of Livedrive backing up:

Virgin media traffic backup

And if you look at the Livedrive detail for the same period, you can see we are just about reaching the upload capacity (note that the pfSense graph above is in bits per second while the Livedrive numbers below are in bytes per second):
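Comparing the two graphs means converting between units. Here is a quick sketch of the conversion; the sample rate is illustrative, not a figure read off the graphs:

```python
# Converting a Livedrive bytes-per-second figure into megabits per
# second so it can be compared with the pfSense traffic graph.
# The sample value (1.2 MB/s) is an illustrative assumption.

def bytes_per_sec_to_mbits(rate_bytes):
    """Convert a rate in bytes/second to megabits/second."""
    return rate_bytes * 8 / 1_000_000

livedrive_rate = 1_200_000  # bytes/sec, illustrative upload figure
print(f"{bytes_per_sec_to_mbits(livedrive_rate):.1f} Mb/s")  # 9.6 Mb/s
```

So a Livedrive figure of about 1.2 MB/s would show up as roughly 9.6 Mb/s on the pfSense graph, close to the 10Mb upload ceiling.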

Livedrive data transfer

So the conclusion from this is that Livedrive needs large files to reach the capacity of the data link.  With a typical collection of files the actual data transfer speed is closer to 1.5 Mbits per second.
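One plausible explanation is a fixed per-file cost (API round trips, connection setup and so on) on top of the raw transfer time. This back-of-envelope model is a guess at that mechanism; the 0.5-second overhead figure is an assumption, not something measured from Livedrive:

```python
# Assumed model: each file costs a fixed overhead on top of the time
# to push its bytes, so many small files never fill the 10Mb upload.

LINK_BYTES_PER_SEC = 10_000_000 / 8  # 10 Mb/s upload, in bytes/sec
PER_FILE_OVERHEAD_SEC = 0.5          # assumed fixed cost per file

def effective_rate(file_size_bytes):
    """Average bytes/sec achieved uploading files of a given size."""
    transfer_time = file_size_bytes / LINK_BYTES_PER_SEC
    return file_size_bytes / (transfer_time + PER_FILE_OVERHEAD_SEC)

for size in (10_000, 1_000_000, 100_000_000):
    print(f"{size:>11,} bytes -> {effective_rate(size) * 8 / 1e6:5.2f} Mb/s")
```

Under these assumptions a folder of 10KB files crawls along well under 1 Mb/s while 100MB files nearly saturate the link, which is at least consistent with the graphs above.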

Thursday, 23 February 2012

AWS conference

Just attended the AWS conference in London today to hear all about the new features Amazon are rolling out.  There are a large number of initiatives to make things easier and also to lock you into Amazon ever more tightly.  From a disaster recovery point of view I liked the CC2 high-compute instance, which can have a 10Gb Ethernet connection; coupled with a 10Gb Direct Connect pipe, that allows you to transfer roughly 1TB an hour directly to the data centre in Dublin.
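A quick bit of arithmetic puts that 1TB-an-hour figure in context against the theoretical ceiling of a 10Gb link:

```python
# Sanity check on "roughly 1 TB an hour" over a 10 Gb/s connection.
# 10 Gb/s = 1.25 GB/s; over 3600 seconds that is the theoretical
# ceiling, and 1 TB/hour is the fraction of it actually achieved.

LINK_GBITS_PER_SEC = 10
gbytes_per_sec = LINK_GBITS_PER_SEC / 8            # 1.25 GB/s
theoretical_tb_per_hour = gbytes_per_sec * 3600 / 1000
print(f"Theoretical maximum: {theoretical_tb_per_hour:.1f} TB/hour")  # 4.5
print(f"1 TB/hour is ~{1 / theoretical_tb_per_hour:.0%} of the link")
```

So the quoted rate works out at roughly a fifth of the raw link capacity, which seems a reasonable allowance for protocol and storage overheads.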

My neighbour gave me a couple of interesting real-world disaster recovery stories where things had gone wrong.  The first was a typical script-based backup to an offsite recovery site.  The problem was that the script had failed about six months before the disaster, so when it was needed there was no backed-up data.

The other example of disaster recovery preparation that went wrong was where the latest data was backed up regularly to CD and the CDs were stored in a fireproof safe.  Come the disaster, the discs were all found to be blank.  It just goes to show how important actually testing your recovery process is.
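Even a crude automated check would have caught both failures above: a script that silently stopped running, and media that turned out to be empty. This is a minimal sketch of that idea; the path and thresholds are illustrative assumptions, and it is no substitute for periodically doing a full restore:

```python
# Minimal backup sanity check: is the latest backup present, recent,
# and non-empty? Path and age threshold are hypothetical examples.
import os
import time

def check_backup(path, max_age_days=2, min_size_bytes=1):
    """Return (ok, reason) for the backup file at path."""
    if not os.path.exists(path):
        return False, "backup file missing"
    age_days = (time.time() - os.path.getmtime(path)) / 86400
    if age_days > max_age_days:
        return False, f"backup is {age_days:.1f} days old"
    if os.path.getsize(path) < min_size_bytes:
        return False, "backup file is empty"
    return True, "ok"
```

Run from cron with an alert on any failure, this would have flagged the dead script within days rather than at disaster time; only an actual trial restore, though, proves the data can be read back.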