March 9, 2011
3 Min Read
Our <a href="http://www.informationweek.com/blog/main/archives/2011/03/dealing_with_re.html">last entry</a> introduced the concept of tiered recovery points; in this entry we will go into more detail. There are typically three types of recovery points you want: instant or close to it, also known as high availability; within a few hours, via some sort of disk or tape backup; and finally recovery of something old, an archive. Each of these tiers needs to be established, and you need to understand when each should be used.
The three recovery tiers of high availability (HA), backup, and archive should have almost equal priority with each other, and each needs to be established for all data sets and applications. High availability can range from seconds to a few minutes to less than an hour, depending on when that application or service is expected to be returned to the user and how important it is to the organization.
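As a rough sketch of what "establishing a tier for every data set" might look like, the policy can be captured as a simple table mapping tiers to recovery time objectives. The tier names, RTO figures, and criticality labels below are illustrative assumptions, not recommendations:

```python
# Hypothetical policy table: recovery tiers and recovery time
# objectives (RTOs). All numbers here are illustrative assumptions.
RECOVERY_TIERS = {
    "high_availability": {"rto_minutes": 5,    "medium": "mirrored disk"},
    "backup":            {"rto_minutes": 240,  "medium": "disk or tape backup"},
    "archive":           {"rto_minutes": 2880, "medium": "tape or disk archive"},
}

def tier_for(criticality: str) -> str:
    """Map a coarse criticality label for a data set to a recovery tier.
    Unrecognized labels fall back to the backup tier."""
    return {
        "critical":   "high_availability",
        "standard":   "backup",
        "historical": "archive",
    }.get(criticality, "backup")
```

The point of writing the policy down, even this crudely, is that every data set gets an explicit tier assignment rather than an implicit one discovered during an outage.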
The primary requirement of an HA solution is the ability to return an application to production rapidly, without a data transfer. This does not, however, require clustering, which is often thought of as expensive and complicated; third-party application availability and replication products can provide the capability as well. The important component is that the data sits on a secondary disk system in a live state, not a backup format. As we discuss in our recent article "Achieving Application Aware Availability", the need for high availability is broader than it used to be. The problem is that applications can't be down for the few hours it may take to transfer data, because of its size, from the backup device (disk or tape) back into production.
To accomplish transfer-less recovery, HA requires mirroring or at least very fast replication of data. In some cases this means that data corruption or deletion can occur on primary storage and be mirrored or replicated to the secondary device before anyone realizes the corruption or deletion has happened. While some HA solutions also take snapshots of data on the target side, not all do, and even those snapshots may be corrupted before the problem is noticed. In other words, we need a second tier of data protection in case something happens to the mirrored or replicated data.
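The value of target-side snapshots comes down to having at least one point in time that predates the corruption. A minimal sketch of that selection logic (the function name and inputs are hypothetical, not any vendor's API):

```python
from datetime import datetime

def last_good_snapshot(snapshot_times, corruption_time):
    """Return the newest snapshot taken strictly before the corruption
    was introduced, or None if every snapshot post-dates it.
    Illustrative only: assumes the corruption time is known."""
    candidates = [t for t in snapshot_times if t < corruption_time]
    return max(candidates) if candidates else None

# If snapshots exist only from after the corruption occurred, this
# returns None, which is exactly the failure case that makes a
# separate backup tier necessary.
```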
While transfer times may cause you to miss your ideal recovery time objective, faced with the alternative, total data loss, you'll be glad to have the secondary recovery tier. This tier will typically be some form of backup application and device. A backup is designed to store many versions of a data set, exceeding the versioning capabilities of most replication technologies. Backups also happen less frequently, so corruption is less likely to sneak into multiple backup data sets. Still, this second tier needs to be focused on rapid recovery, which most often means disk-based recovery over a high-speed network.
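The transfer-time penalty of this tier is easy to put a rough number on: data size divided by sustained throughput. The helper below is a back-of-the-envelope sketch that ignores seek time, verification passes, and catalog overhead:

```python
def restore_hours(data_gb: float, throughput_mb_per_s: float) -> float:
    """Rough restore-time estimate in hours: data size divided by
    sustained transfer rate. Ignores seek, verification, and
    catalog overhead, so treat the result as a lower bound."""
    seconds = (data_gb * 1024) / throughput_mb_per_s
    return seconds / 3600

# For example, restoring 2 TB over a sustained 100 MB/s link works
# out to roughly 5.8 hours, which shows why this tier cannot stand
# in for high availability.
```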
Finally there is the archive, often tape-based but sometimes disk-based. Ideally this is not a "when all else has failed" recovery tier; it should more often be used to recover old versions of data as part of a legal action or when a project is re-activated. However, if the two prior tiers have failed, it is good to know this tier is available for a last-ditch recovery effort.
Moving data between these tiers can be a challenge as well. In an upcoming entry we will discuss some options to automate that movement and to minimize redundant data copies.
George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Find Storage Switzerland's disclosure statement here.
About the Author(s)
President, Storage Switzerland
George Crump is president and founder of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. With 25 years of experience designing storage solutions for datacenters across the US, he has seen the birth of such technologies as RAID, NAS, and SAN. Prior to founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration, and product selection. George is responsible for the storage blog on InformationWeek's website and is a regular contributor to publications such as Byte and Switch, SearchStorage, eWeek, SearchServerVirtualization, and SearchDataBackup.