You may ask yourself "why does a total data loss matter?" Fair question; this is, after all, backup data. First, most backup administrators, armed with disk backup, now count on not having to perform as many full backups, instead running a higher number of incremental, differential or synthetic full daily backups. The full backup window may now be designed to happen only once a quarter, and the time saved by eliminating the weekly full backup has probably been absorbed by some other process. Total data loss is especially costly on backup deduplication systems, since their efficiency depends on previous generations of files. A total failure means that the entire deduplication process would essentially need to start all over again.
Until drive manufacturers make drives that never fail, the key is for backup deduplication systems to get through this rebuild process sooner or to use a different process altogether. RAID is RAID, and the larger drives get, the more work is involved in the rebuild process. There are ways around this, though. First, you can throw more storage horsepower at the problem. While you are limited by drive mechanics, the faster the parity calculations can be done, the better. Another option is to not fail the entire drive but to use intelligence to mark out the bad section of the drive and keep on going.
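The "mark out the bad section" idea can be sketched in a few lines. This is a hypothetical illustration, not any vendor's implementation: the `Drive` class, its spare-area layout, and the `mark_bad` method are all invented for the example. The point is that a single failing sector gets redirected to a reserved spare region while the rest of the drive stays in service, avoiding a full RAID rebuild.

```python
# Hypothetical sketch: instead of failing a whole drive on a bad sector,
# remap that sector into a reserved spare area and keep the drive online.
class Drive:
    SPARE_START = 1000  # first sector of the spare region (assumed layout)

    def __init__(self):
        self.data = {}                 # sector number -> payload bytes
        self.remap = {}                # bad sector -> spare sector
        self.next_spare = self.SPARE_START

    def write(self, sector, payload):
        self.data[self._resolve(sector)] = payload

    def read(self, sector):
        return self.data[self._resolve(sector)]

    def mark_bad(self, sector):
        """Redirect a failing sector to the spare area, salvaging its contents."""
        spare = self.next_spare
        self.next_spare += 1
        if sector in self.data:        # rescue whatever is still readable
            self.data[spare] = self.data.pop(sector)
        self.remap[sector] = spare

    def _resolve(self, sector):
        # All reads and writes go through the remap table transparently.
        return self.remap.get(sector, sector)

d = Drive()
d.write(7, b"backup-block")
d.mark_bad(7)                          # sector 7 goes bad; drive stays online
assert d.read(7) == b"backup-block"    # the data is still reachable
```

In a real array this logic lives in drive firmware or the storage controller, and the salvage step would fall back to parity when the failing sector is unreadable; the sketch only shows the remapping bookkeeping.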
Another option is to use a different data protection algorithm than RAID. Erasure coding and Reed-Solomon techniques may offer better rebuild times. These and other techniques understand which blocks on a drive actually contain data and rebuild only those blocks, which again is faster. The other option, probably the least attractive in the disk backup space, is mirroring, since, again, disk backup is trying to compete with tape on cost.
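The data-aware rebuild idea can be shown with a toy two-drive-plus-parity layout. This is a minimal sketch under invented assumptions (tiny 4-byte blocks, simple XOR parity rather than true Reed-Solomon, and an allocation bitmap the system is assumed to maintain); the principle is that unallocated stripes are zero-filled instead of read and recomputed.

```python
from functools import reduce

BLOCK = 4  # bytes per block, kept tiny for illustration

def xor_blocks(*blocks):
    """XOR equal-length byte blocks together (simple parity)."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

# Two data drives with four blocks each, plus a parity drive.
drive0 = [b"aaaa", b"\x00" * BLOCK, b"cccc", b"\x00" * BLOCK]
drive1 = [b"1111", b"\x00" * BLOCK, b"3333", b"\x00" * BLOCK]
parity = [xor_blocks(a, b) for a, b in zip(drive0, drive1)]
allocated = [True, False, True, False]   # bitmap: stripes 1 and 3 hold no data

# Drive 1 fails. A data-aware rebuild reconstructs only allocated stripes
# from the survivor plus parity, and zero-fills the rest -- here that
# halves the read-and-XOR work compared to a blind whole-drive rebuild.
rebuilt = [
    xor_blocks(d0, p) if used else b"\x00" * BLOCK
    for d0, p, used in zip(drive0, parity, allocated)
]
assert rebuilt == drive1
```

On a mostly empty or lightly used array this is where the rebuild-time win comes from: the work scales with data written, not with raw drive capacity.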
A final option may be to use smaller, faster drives and then, through backup virtualization, leverage tape to keep the size of the front-end disk smaller. As we discussed in our recent article "Breaking The Top Four Myths Of Tape vs. Disk Backup", tape is not subject to the cost-per-GB scrutiny that disk is when it is used as part of the backup process. It may sound a little like turning back the clock, but this small disk-based cache, backed by an increasingly reliable tape library or even used as a front end to a deduplicated disk back end, may be an ideal solution.
Additional Blogs in this Series:
Backup Deduplication 2.0 - Recovery Performance
Backup Deduplication 2.0 - Density
Backup Deduplication 2.0 - Power Savings
Backup Deduplication 2.0 - Integration
Track us on Twitter: http://twitter.com/storageswiss
Subscribe to our RSS feed.
George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Find Storage Switzerland's disclosure statement here.