One thing suppliers and analysts are quick to point out is that when it comes to data protection, it is not about how well you back up, it is about how well you recover. That sounds catchy and is mostly accurate. I believe, however, that backup is an equally important part of the data protection puzzle. It is, after all, poor backup strategies that make recovery so hard and unpredictable.

First, you must have something to recover for the recovery effort to work. No data, no recovery. Second, you must have that data where you need it. If your primary site is down and you need to start recovering at your DR site, that is the wrong time to find out that the data didn't make it there, or that it takes your vaulting service three hours to deliver your data back to you. I hate to say it, but this is where real-world testing comes in. One strategy I have seen used to really test a DR plan is to have someone who is not on your team execute it. For example, find a trusted storage VAR that knows your products but not your company: can they follow your plan, recover your data and get you up and running? If disaster really does strike, having someone outside the company execute your recovery plan may be the reality.
An effective recovery effort starts with knowing how your backup performed locally. Do you have backup reporting tools, like those from Tek-Tools and APTARE, that can give you an accurate snapshot of the backup process? Knowing your backup also means recognizing that you will probably have more than one data protection application. Having a single tool that can report across all of your data protection processes (snapshots, backup, replication) gives you this "forewarned is forearmed" knowledge.
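The idea of rolling results from several data protection processes into one view can be sketched in a few lines. This is an illustrative toy, not the API of any actual reporting product; the job names and sources are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Job:
    source: str      # which data protection process reported it
    name: str        # hypothetical job name, for illustration only
    succeeded: bool

# Hypothetical job results pulled from several tools
jobs = [
    Job("snapshot", "vm-cluster-01", True),
    Job("backup", "exchange-db", False),
    Job("backup", "file-server", True),
    Job("replication", "dr-site-sync", True),
]

def summarize(jobs):
    """Roll per-process success/failure counts into one report."""
    report = {}
    for job in jobs:
        ok, fail = report.get(job.source, (0, 0))
        report[job.source] = (ok + job.succeeded, fail + (not job.succeeded))
    return report

for source, (ok, fail) in summarize(jobs).items():
    print(f"{source}: {ok} succeeded, {fail} failed")
```

Even this toy shows the point: one pass over all the sources surfaces the failed Exchange backup before you need it at the DR site.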
Once you know the data is backed up locally, you also need to know that the backup is available remotely at the DR site. This requires knowing how you are going to get your data off-site. The replication capabilities of deduplication products like Data Domain, Exagrid and Nexsan may be more responsible for their success in the marketplace than the on-premises storage capacity savings they enable. As we detail in our article "Deduplication Means Affordable DR", the deduplication process is what enables entire backup jobs to be replicated across thin WAN connections to DR sites. For businesses that don't have a secondary site suitable to replicate to, companies like Simply Continuous are beginning to host the deduplication target, leveraging a cloud-based model to offer recovery as a service.
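Why deduplication makes a thin WAN connection viable can be shown with a minimal sketch: fingerprint each chunk of the backup stream and transmit only chunks the DR site has not already seen. This is a simplified fixed-size-chunk model for illustration; shipping products typically use variable-size chunking and far more sophisticated indexing.

```python
import hashlib

CHUNK_SIZE = 4096  # fixed-size chunks, for simplicity

def chunk_hashes(data: bytes):
    """Split a backup stream into chunks and fingerprint each one."""
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        yield hashlib.sha256(chunk).hexdigest(), chunk

def replicate(data: bytes, remote_store: dict) -> int:
    """Send only chunks the DR site lacks; return bytes put on the wire."""
    bytes_sent = 0
    for digest, chunk in chunk_hashes(data):
        if digest not in remote_store:
            remote_store[digest] = chunk  # "transmit" the new chunk
            bytes_sent += len(chunk)
    return bytes_sent

# Monday's full backup: ten distinct 4 KB chunks.
# Tuesday's full backup: the same data with one chunk changed.
dr_site = {}
monday = b"".join(bytes([i]) * CHUNK_SIZE for i in range(10))
tuesday = (monday[:8 * CHUNK_SIZE]
           + bytes([99]) * CHUNK_SIZE
           + monday[9 * CHUNK_SIZE:])

sent_day1 = replicate(monday, dr_site)   # every chunk is new: 40960 bytes
sent_day2 = replicate(tuesday, dr_site)  # only the changed chunk: 4096 bytes
print(sent_day1, sent_day2)
```

The second "full backup" crosses the WAN at a tenth of the cost of the first, which is exactly the effect that lets entire backup jobs replicate over thin connections.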
The investment in the data protection process is almost always focused on getting the backups done faster. Attention needs to turn to making sure they are also done reliably, and that those backups are positioned at the right location for recovery at the right time. Spending time up front on the process, and then testing it, is critical for DR success.
George Crump is founder of Storage Switzerland, an analyst firm focused on the virtualization and storage marketplaces. It provides strategic consulting and analysis to storage users, suppliers, and integrators. An industry veteran of more than 25 years, Crump has held engineering and sales positions at various IT industry manufacturers and integrators. Prior to Storage Switzerland, he was CTO at one of the nation's largest integrators.