Containing The Cost Of Keeping Data Forever - Capacity

As we stated when we began the Keeping Data Forever strategy (http://www.informationweek.com/blog/main/archives/2010/06/revisiting_the.html), the reason we can even consider this a viable strategy is that technology has provided us with solutions to the challenges associated with it. In this entry we will look at some of the ways to contain the costs associated with the strategy, starting with capacity costs.

George Crump, President, Storage Switzerland

July 12, 2010

The first and most obvious cost to contain is the cost of storing all of this information forever. There is little doubt that disk archive systems can now scale to meet the multi-petabyte capacity demands that a keep-it-forever strategy may entail, but you don't want to buy all of that capacity today. Look for a way to add capacity only as you need it, and not before. And while it is good that many storage systems can meet the capacity demands of a keep-it-forever strategy, that doesn't mean you want to take advantage of it if you don't have to. You are going to want to curtail capacity growth as best you can, and that means capacity optimization is a key requirement.
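As a rough illustration of buying capacity only when it is needed, the sketch below projects how long an installed archive tier will last under an assumed growth rate. The figures and the simple growth model are illustrative assumptions, not measurements from any particular system.

```python
def years_of_headroom(current_tb, installed_tb, annual_growth_rate):
    """Count full years before the projected archive outgrows installed capacity."""
    years = 0
    projected = current_tb
    while projected * (1 + annual_growth_rate) <= installed_tb:
        projected *= 1 + annual_growth_rate
        years += 1
    return years

# Illustrative assumptions: 400 TB archived today, 1 PB of shelves installed,
# and 35% annual growth in retained data.
print(years_of_headroom(current_tb=400, installed_tb=1000, annual_growth_rate=0.35))  # -> 3
```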

Capacity optimization should come in at least two forms: compression and deduplication. Deduplication, the ability to identify redundant data and store it only once, captures most of the attention. While it is important in a keep-data-forever strategy, an archive should not contain the redundancy that a backup does. As we stated in our article "Backup vs. Archive", backups send essentially the same data over and over again. Archiving should be a one-time event: data is archived once, replicated to a redundant archive, and then removed from primary storage. While some duplication will certainly exist, deduplicating it will not deliver the same return on investment that backup deduplication does. Your mileage will vary, but expect roughly a 3X to 5X reduction.
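To make the idea concrete, here is a minimal sketch of hash-based deduplication: each chunk of data is keyed by its SHA-256 digest and stored only once. Real archive products use far more sophisticated chunking and indexing; the sample data below is invented purely for illustration.

```python
import hashlib

def dedupe_ratio(chunks):
    """Store each unique chunk once, keyed by its SHA-256 digest, and
    return the logical-to-physical reduction (3.0 means a 3X reduction)."""
    store = {}
    logical_bytes = 0
    for chunk in chunks:
        logical_bytes += len(chunk)
        store.setdefault(hashlib.sha256(chunk).hexdigest(), len(chunk))
    physical_bytes = sum(store.values())
    return logical_bytes / physical_bytes

# Archives hold each object once, so redundancy (and the payoff) is modest;
# repeated full backups of the same files would score far higher.
chunks = [b"contract-2010.pdf" * 100,   # archived once...
          b"contract-2010.pdf" * 100,   # ...plus a stray duplicate copy
          b"lab-results.csv" * 100,
          b"board-minutes.doc" * 100]
print(dedupe_ratio(chunks))             # ~1.35X for this invented data
```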

While compression does not deliver the same percentage gains that deduplication does where duplicate data exists, compression works across almost all data, redundant or not. Gaining a 2X reduction on every file may provide greater savings than a 3X reduction on only a few files. The best choice, though, is to combine the two techniques for maximum total reduction.
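A quick back-of-the-envelope comparison shows why. The sketch below assumes, for illustration only, a 100 TB archive in which 20% of the data is duplicate, and compares a 2X compression ratio applied to everything against a 3X deduplication ratio applied only to the duplicate portion, and then the two combined.

```python
def stored_tb(total_tb, dup_fraction, dedupe_ratio=1.0, compress_ratio=1.0):
    """Physical TB after shrinking the duplicate share by dedupe_ratio,
    then compressing everything by compress_ratio."""
    after_dedupe = total_tb * (1 - dup_fraction) + total_tb * dup_fraction / dedupe_ratio
    return after_dedupe / compress_ratio

TOTAL_TB = 100.0   # size of the archive (assumed)
DUP_SHARE = 0.20   # assume only 20% of archived data is duplicate

print(stored_tb(TOTAL_TB, DUP_SHARE, compress_ratio=2.0))                    # 50.0 TB
print(stored_tb(TOTAL_TB, DUP_SHARE, dedupe_ratio=3.0))                      # ~86.7 TB
print(stored_tb(TOTAL_TB, DUP_SHARE, dedupe_ratio=3.0, compress_ratio=2.0))  # ~43.3 TB
```

Under those assumed numbers, compression alone stores 50 TB, deduplication alone about 87 TB, and the combination roughly 43 TB, which is why pairing the two techniques wins.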

Tape as a means to contain capacity costs cannot be left out of the discussion. As we discuss in our article "What Is LTFS?", IBM's new tape-based file system for LTO makes tape more viable than ever for long-term data retention. Tape and disk archives should no longer be looked at as competitors but as complements, with disk filling the intermediate role of storing data for three to seven years and tape storing it for the remainder of its life. Several solutions can automatically move data from disk to tape after a given timeframe.
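As a sketch of what such a disk-to-tape policy can look like, the snippet below walks a disk archive and moves anything older than a cutoff to a tape tier presented as an LTFS mount point. The paths and the five-year cutoff are hypothetical, and commercial archive software would handle this (plus cataloging and verification) automatically.

```python
import os
import shutil
import time

DISK_ARCHIVE = "/archive/disk"            # hypothetical disk-archive root
TAPE_ARCHIVE = "/mnt/ltfs"                # hypothetical LTFS mount point (tape tier)
AGE_LIMIT_SECONDS = 5 * 365 * 24 * 3600   # move data untouched for ~5 years

def migrate_old_files():
    """Move files older than the cutoff from the disk tier to the tape tier."""
    cutoff = time.time() - AGE_LIMIT_SECONDS
    for root, _dirs, files in os.walk(DISK_ARCHIVE):
        for name in files:
            src = os.path.join(root, name)
            if os.path.getmtime(src) < cutoff:
                dst = os.path.join(TAPE_ARCHIVE, os.path.relpath(src, DISK_ARCHIVE))
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.move(src, dst)     # LTFS lets the tape behave like a file system

if __name__ == "__main__":
    migrate_old_files()
```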

Track us on Twitter: http://twitter.com/storageswiss

Subscribe to our RSS feed.

George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Find Storage Switzerland's disclosure statement here.

About the Author

George Crump

President, Storage Switzerland

George Crump is president and founder of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. With 25 years of experience designing storage solutions for datacenters across the US, he has seen the birth of such technologies as RAID, NAS, and SAN. Prior to founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration, and product selection. George is responsible for the storage blog on InformationWeek's website and is a regular contributor to publications such as Byte and Switch, SearchStorage, eWeek, SearchServerVirtualization, and SearchDataBackup.
