Knowing Your Recovery Will Work: Understanding Images
In my <a href="http://www.informationweek.com/blog/main/archives/2010/05/knowing_that_yo_1.html">last entry</a> the idea of image-based backup was introduced as a way to improve recovery confidence. If you take the advice of the <a href="http://www.informationweek.com/blog/main/archives/2010/05/knowing_that_yo.html">first entry</a> in this series and focus on service level agreements (SLAs) instead of backups, you can narrow the field to the truly critical machines that you know must be recovered. With image-level backups you can start these systems as virtual machines weekly, which is potentially the ultimate in recovery verification.
Any discussion of image backup generates concerns, though, and we'll address them in this entry. The typical misconceptions about image-based backup are that it can't do incremental restores, can't do point-in-time restores, is slow, wastes disk space and has no tape-out functionality. While many of these concerns used to be valid, most have been addressed either by the backup applications themselves or through the combination of backup software and storage hardware.
An image-based backup captures the raw image that holds the server, whether that is an image of a standalone hard disk or of a virtual machine. Image backups first gained popularity when file servers with millions of files began to appear: it was easier to back up the whole image than to walk the file system and check each file. As the technology evolved, it became obvious that this was also a quick way to recover an entire server; again, it was easier to transfer one big image than many little files. The weakness was getting a single file from within that image. Most applications could not do that, so a separate file-level backup had to be run. Also, using these backups for point-in-time recoveries meant storing multiple copies of the whole server image, repeatedly, week after week, even though for the most part you only need the whole server image for emergency recoveries, which typically come from the most recent backup.
Backup software applications began to mature and were able to peer into image backups to do individual file recoveries. Further, they gained the ability to transfer only changed files or blocks to the backup target, optimizing not only the bandwidth used but also the capacity required to hold multiple versions of the server. This became known as block-level incremental backup. To provide historical reference, most image backup products use a snapshot-like technique that preserves the backed-up server's data at each point in time. Operating systems and hypervisors have begun to help out by communicating directly to the backup application which blocks have changed since the last backup. This allows for greater efficiency and speed, since the file system no longer needs to be scanned for block changes.
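To make the scan-based approach concrete, here is a minimal sketch of block-level change detection: the image is divided into fixed-size blocks, each block is hashed, and only blocks whose hashes differ from the last backup are flagged for transfer. This is an illustration, not any vendor's implementation; the block size and the use of SHA-256 are assumptions, and changed block tracking in a hypervisor would replace the full scan entirely.

```python
import hashlib

BLOCK_SIZE = 4096  # assumed fixed block size; real products tune this


def block_hashes(image_path):
    """Hash every block of the image. This full scan is exactly the work
    that hypervisor changed-block tracking lets the backup app skip."""
    hashes = {}
    with open(image_path, "rb") as f:
        index = 0
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            hashes[index] = hashlib.sha256(block).hexdigest()
            index += 1
    return hashes


def changed_blocks(previous_hashes, current_hashes):
    """Return indices of blocks that are new or differ from the last backup;
    only these blocks need to be sent to the backup target."""
    return sorted(
        i for i, h in current_hashes.items() if previous_hashes.get(i) != h
    )
```

After a full backup records the initial hash map, each incremental run hashes the image again and transfers only the indices returned by `changed_blocks`, which is what keeps both bandwidth and stored capacity proportional to the change rate rather than the image size.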
Finally, tape-out has been addressed by these applications. Traditional backup applications that have added image-level backups treat the move to tape as merely a copy job. Applications that started as image-only technologies still need to add this capability, although an easy workaround exists today in software that presents tape as a file system. In that scenario you simply have a small disk front end, and the file system automatically migrates files to tape based on policies you set.
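The policy-driven migration described above can be sketched in a few lines. This is a toy model, not a real tape file system: the tape tier is represented as a plain directory, and the single age-based policy threshold is an assumption standing in for whatever rules the product lets you configure.

```python
import os
import shutil
import time

AGE_THRESHOLD_DAYS = 30  # hypothetical policy: migrate files idle for 30 days


def migrate_cold_files(disk_cache_dir, tape_dir, now=None):
    """Move files that have aged past the policy threshold from the small
    disk front end to the tape-backed tier (modeled here as a directory)."""
    now = now if now is not None else time.time()
    migrated = []
    for name in sorted(os.listdir(disk_cache_dir)):
        path = os.path.join(disk_cache_dir, name)
        if not os.path.isfile(path):
            continue
        age_days = (now - os.path.getmtime(path)) / 86400
        if age_days >= AGE_THRESHOLD_DAYS:
            shutil.move(path, os.path.join(tape_dir, name))
            migrated.append(name)
    return migrated
```

Run on a schedule, a loop like this keeps the disk front end small while the bulk of older backup data lands on tape automatically, which is the behavior the tape-as-file-system products provide.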
100% confidence in recoveries comes from testing them repeatedly to make sure they will work. The challenge is finding the time and resources to test those recoveries. Image backups make this process of starting up a server significantly easier and server virtualization makes it cost effective. Leveraging SLAs at the front end allows you to focus on only the systems whose recovery will make or break the enterprise.
Track us on Twitter: http://twitter.com/storageswiss
Subscribe to our RSS feed.
George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Find Storage Switzerland's disclosure statement here.