The Primary Storage Temptation
As IT staffs are stretched even thinner, a challenge is arising around what to do with primary storage. There is still a need for more and more capacity, but there are limited dollars to buy it and limited resources to manage it properly.
The temptation is to just buy more primary storage, because it is relatively easy to add capacity to your existing infrastructure. It may even feel easier and less expensive to add to primary storage than to implement a data management process that moves old data off to a disk-based archive, such as those offered by Permabit, Nexsan, or Copan Systems.
The obvious issue with just adding more storage is the cost of primary storage vs. the cost of secondary storage. Even if you decide to use SATA storage from your primary supplier, that storage typically costs more than SATA storage from other suppliers, and you would still have to develop a strategy to move old data to the SATA tier.
The other challenge with leaving data on primary storage is that, for the most part, you have to leave it in its native state. While some compression and data reduction can be performed on primary storage, greater gains can be made on archive storage. Because archive storage is under far less performance pressure, more aggressive steps can be taken to shrink the capacity actually consumed. Most archive systems have some form of compression or deduplication built in.
For more advanced data reduction, companies such as Ocarina Networks offer tools that scan active storage for stagnant data and, before moving it, apply a series of compressors and deduplication to squeeze maximum capacity reduction out of the files; the smaller files are then moved to the archive storage system. Because these tools work in the background on stagnant data, they can take extra time to maximize the reduction, even on files that do not typically yield strong data reduction results.
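To make that workflow concrete, here is a minimal sketch of the general pattern in Python: walk a primary volume, treat files untouched for a set number of days as stagnant, compress them, and land them on an archive mount. The mount paths, the 180-day threshold, and the use of simple gzip compression are all assumptions for illustration; commercial tools such as Ocarina's apply far more sophisticated, content-aware compressors and deduplication.

```python
import gzip
import shutil
import time
from pathlib import Path

STAGNANT_AFTER_DAYS = 180               # assumption: "stagnant" means not accessed in ~6 months
PRIMARY_ROOT = Path("/mnt/primary")     # hypothetical primary storage mount
ARCHIVE_ROOT = Path("/mnt/archive")     # hypothetical archive storage mount

def is_stagnant(path: Path, cutoff: float) -> bool:
    """A file is considered stagnant if its last access time predates the cutoff."""
    return path.stat().st_atime < cutoff

def archive_file(src: Path) -> None:
    """Compress the file onto the archive tier, preserving its relative path."""
    dest = ARCHIVE_ROOT / src.relative_to(PRIMARY_ROOT)
    dest = dest.with_suffix(dest.suffix + ".gz")
    dest.parent.mkdir(parents=True, exist_ok=True)
    with src.open("rb") as f_in, gzip.open(dest, "wb") as f_out:
        shutil.copyfileobj(f_in, f_out)
    src.unlink()                        # remove the original only after the archive copy exists

def sweep() -> None:
    """Scan the primary volume in the background and move stagnant files to the archive."""
    cutoff = time.time() - STAGNANT_AFTER_DAYS * 86400
    for path in PRIMARY_ROOT.rglob("*"):
        if path.is_file() and is_stagnant(path, cutoff):
            archive_file(path)

if __name__ == "__main__":
    sweep()
```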
Archive storage is already far less expensive per terabyte than primary storage, and once data reduction shrinks the actual space required on the archive, the overall cost of that secondary tier drops even further. For example, if 10 TB of stagnant data can be stored on 2 TB of archive space, the savings in acquisition costs and power, the reduction in the backup window, and the ability to retain data longer make the investment in an archive process obvious.
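The arithmetic behind that example is worth spelling out. The per-terabyte prices below are placeholder assumptions, not vendor quotes, but they show how the cheaper tier and the 5:1 data reduction compound:

```python
# Back-of-envelope comparison (all prices are placeholder assumptions, not quotes).
stagnant_tb = 10.0            # stagnant data currently sitting on primary storage
reduction_ratio = 5.0         # 5:1 reduction from compression/dedup (10 TB -> 2 TB)
primary_cost_per_tb = 3000.0  # assumed $/TB for primary storage
archive_cost_per_tb = 1000.0  # assumed $/TB for archive storage

archive_tb_needed = stagnant_tb / reduction_ratio
cost_on_primary = stagnant_tb * primary_cost_per_tb
cost_on_archive = archive_tb_needed * archive_cost_per_tb

print(f"Keep on primary : {stagnant_tb:.0f} TB x ${primary_cost_per_tb:,.0f}/TB = ${cost_on_primary:,.0f}")
print(f"Move to archive : {archive_tb_needed:.0f} TB x ${archive_cost_per_tb:,.0f}/TB = ${cost_on_archive:,.0f}")
print(f"Acquisition savings alone: ${cost_on_primary - cost_on_archive:,.0f}")
```

Even before counting power, floor space, and backup-window savings, the acquisition gap alone is large under these assumptions.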
In our next entry, we will discuss the problems that expanding primary storage creates beyond the cost factors.
Join us for our upcoming Webcast on Improving IT Efficiency.
Track us on Twitter: http://twitter.com/storageswiss.
Subscribe to our RSS feed.
George Crump is founder of Storage Switzerland, an analyst firm focused on the virtualization and storage marketplaces. It provides strategic consulting and analysis to storage users, suppliers, and integrators. An industry veteran of more than 25 years, Crump has held engineering and sales positions at various IT industry manufacturers and integrators. Prior to Storage Switzerland, he was CTO at one of the nation's largest integrators.