An Optimize-Once Storage Optimization Strategy

Storage optimization technologies like compression and deduplication have reduced the capacity requirements of many processes within the data center, most noticeably backup. When these data sets need to move between storage types, though, much of this optimization is lost. For storage optimization to achieve broad adoption, it must move beyond just saving hard drive space. It has to increase data center efficiency and only optimize once.

George Crump, President, Storage Switzerland

November 5, 2010

3 Min Read

Storage optimization can improve efficiency by moving less data between the different types of storage. If this happens, not only is the capacity of the target storage better utilized, but so is the network. In addition, the target storage device is spared the load of having to re-optimize a data set, meaning it can devote its processing power to other functions. An optimize-once methodology would greatly increase the scalability of the data center. This increase in infrastructure efficiency potentially has more value than the capacity savings. After all, capacity is relatively inexpensive; additional networks are not, nor is processor time on the storage array.
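As a back-of-envelope illustration of that trade-off (the data set size, reduction ratio, and link speed below are assumptions for the sketch, not measured figures), a few lines of Python show how much of the savings lands on the wire rather than on disk:

```python
# Back-of-envelope comparison: moving a backup set between storage tiers
# in its optimized form versus rehydrating it first. All figures are
# illustrative assumptions, not measurements from any product.

DATA_SET_TB = 10.0      # logical size of the data set (assumed)
REDUCTION_RATIO = 5.0   # combined dedupe + compression ratio (assumed)
LINK_GBPS = 10.0        # network link speed, e.g. 10 GbE (assumed)

def transfer_hours(size_tb: float, link_gbps: float) -> float:
    """Hours needed to move size_tb terabytes over a link_gbps link."""
    bits = size_tb * 8e12                  # TB -> bits (decimal terabytes)
    return bits / (link_gbps * 1e9) / 3600

optimized_tb = DATA_SET_TB / REDUCTION_RATIO
print(f"Rehydrated move:    {DATA_SET_TB:.1f} TB, "
      f"{transfer_hours(DATA_SET_TB, LINK_GBPS):.2f} h on the wire")
print(f"Optimize-once move: {optimized_tb:.1f} TB, "
      f"{transfer_hours(optimized_tb, LINK_GBPS):.2f} h on the wire")
```

At an assumed 5:1 reduction ratio, the same move ties up the network for roughly a fifth as long, which is where the scalability argument comes from.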

The problem with today's storage optimization delivering efficiency as well as capacity savings is that it is very compartmentalized. When data needs to move from one storage type to another, it often has to be inflated (un-optimized), moved to the target device, and then potentially re-optimized. This means the source storage spends processing resources re-inflating the data, the network has to handle the full data load, and the processors on the target storage have to optimize the same data a second time.
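A minimal sketch makes the three costs explicit (the function names and the stand-in `send` below are mine, purely illustrative, not any vendor's data path):

```python
def send(blocks):
    # Stand-in for the network hop; the bytes passed here are the wire load.
    return blocks

def move_compartmentalized(optimized_blocks, inflate, optimize):
    """Today's path: every step spends a resource the data center could keep."""
    raw = inflate(optimized_blocks)   # source CPU: re-inflate the data set
    received = send(raw)              # network: carries the full, inflated load
    return optimize(received)         # target CPU: optimizes the same data again

def move_optimize_once(optimized_blocks):
    """Optimize-once path: the reduced form travels end to end untouched."""
    return send(optimized_blocks)     # smaller wire load, no CPU on either side

# Trivial stand-ins: here "optimization" simply halves the byte count.
inflate = lambda b: b * 2
optimize = lambda b: b[: len(b) // 2]
data = b"x" * 512                     # already-optimized blocks
assert move_compartmentalized(data, inflate, optimize) == move_optimize_once(data)
```

Both paths deliver identical data; the difference is entirely in what the source CPU, network, and target CPU had to do along the way.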

To some extent, as storage vendors continue to adopt primary storage deduplication, some of this re-deduplication can be reduced, and efficiency between those systems can potentially increase. A copy between tiers of storage, especially tiers connected to the same storage controller, should not require re-inflation of the data. It does mean, though, that you would be locked into the same vendor for all your storage needs unless some sort of open standard is agreed to within the storage community. As it stands right now, storage suppliers that have primary storage deduplication in their systems and also offer deduplicated target devices typically have to re-inflate data before it is moved.
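To see why a common optimization engine (or an open standard) would eliminate the re-inflation step, consider a toy content-addressed store: if both tiers chunk and fingerprint data the same way, a copy reduces to sending only the chunks the target lacks. The fixed 4 KB chunks and SHA-256 fingerprints here are assumptions for the sketch, not any vendor's format:

```python
import hashlib

CHUNK = 4096  # fixed-size chunks for simplicity; real systems vary (assumption)

def chunks(data: bytes):
    return [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]

class Tier:
    """A storage tier indexed by chunk fingerprint (content-addressed)."""
    def __init__(self):
        self.store = {}  # SHA-256 digest -> chunk bytes

    def put(self, data: bytes):
        for c in chunks(data):
            self.store[hashlib.sha256(c).hexdigest()] = c

    def copy_to(self, other: "Tier") -> int:
        """Send only the chunks the target doesn't already hold."""
        missing = [h for h in self.store if h not in other.store]
        for h in missing:
            other.store[h] = self.store[h]
        return len(missing)  # chunks that actually crossed the wire

# Two tiers that speak the same optimization format never rehydrate:
primary, backup = Tier(), Tier()
primary.put(b"A" * 8192 + b"B" * 4096)
print(primary.copy_to(backup), "chunk(s) sent")    # duplicate chunks collapse
print(primary.copy_to(backup), "chunk(s) resent")  # 0: fingerprints already match
```

The second copy costs nothing because the fingerprints already match on both sides; that only works when source and target share the engine, which is exactly the lock-in question the open-standard debate is about.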

However, as we discussed in our recent article "What is Real-Time Data Compression," some appliance-like systems can operate inline between different vendors' storage systems and provide this functionality today across a variety of platforms, letting data move across networks and storage tiers in an optimized state. We expect this functionality to increase, and we expect traditional storage vendors to provide a common optimization engine across their storage offerings. This is also an excellent opportunity for standalone storage virtualization appliance and software vendors to add storage optimization to their technology.

As storage optimization becomes more universally available and controlled by a single optimization engine, the ability to optimize once moves closer to reality. With an optimize-once approach, networks can handle many times more traffic than they do today without a measurable loss in performance. What you need to decide is whether to get your optimization technology from a single vendor and hope they integrate it, or to select a single neutral appliance that can optimize across different types and manufacturers of storage.


George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Find Storage Switzerland's disclosure statement here.

About the Author

George Crump

President, Storage Switzerland

George Crump is president and founder of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. With 25 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, and SAN. Prior to founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration, and product selection. George is responsible for the storage blog on InformationWeek's website and is a regular contributor to publications such as Byte and Switch, SearchStorage, eWeek, SearchServerVirtualization, and SearchDataBackup.
