Commentary

Optimize Cloud Storage, Flash Storage And Deduplication

In our last entry we discussed the growing importance of efficiency. Tools and better storage systems can help make IT administrators more efficient; the other option is to keep throwing new technology at the problem. Cloud Storage, Flash Storage and Deduplication are great examples. While all of these technologies can claim efficiency on the CapEx side of the equation, without proper tools and procedures they can't claim much from an OpEx standpoint. Cloud Storage can significantly reduce internal storage management costs, and as we discuss in our recent article on The Evolution of Data Archiving, for some customers it is an ideal target for archive storage. Without tools to identify and move data to that archive, however, manually guessing which data can be archived is too time-consuming.

Storage systems play an enormous role in increasing storage efficiency. Companies like 3PAR, HDS, NetApp and DataCore provide virtualization that removes the need to plan and manage LUNs: storage is grouped into a pool and used as needed by the servers attached to it. Thin provisioning helps on the CapEx side by letting administrators avoid pre-allocating storage and spending time planning exact LUN sizes. Multi-protocol support provides the flexibility to connect servers to storage by whatever means are appropriate and affordable.
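The pooling idea behind thin provisioning can be illustrated with a toy model. This is purely a sketch of the concept, not any vendor's implementation: volumes advertise a large logical size, but physical blocks are drawn from the shared pool only when data is actually written, which is why a pool can be oversubscribed.

```python
class ThinPool:
    """Toy model of a thin-provisioned storage pool (illustrative only)."""

    def __init__(self, physical_blocks):
        self.free = physical_blocks  # physical blocks not yet consumed
        self.volumes = {}            # volume name -> {"logical": ..., "used": ...}

    def create_volume(self, name, logical_blocks):
        # Creation reserves no physical space -- only a logical size.
        self.volumes[name] = {"logical": logical_blocks, "used": 0}

    def write(self, name, blocks):
        # Physical capacity is allocated lazily, at write time.
        vol = self.volumes[name]
        if vol["used"] + blocks > vol["logical"]:
            raise ValueError("write exceeds volume's logical size")
        if blocks > self.free:
            raise RuntimeError("pool exhausted: add physical capacity")
        vol["used"] += blocks
        self.free -= blocks


pool = ThinPool(physical_blocks=1000)
pool.create_volume("db", logical_blocks=5000)  # oversubscribed on purpose
pool.write("db", 200)                          # only now is capacity consumed
```

The oversubscription in the example is the point: the administrator no longer has to guess the exact LUN size up front, but the pool's free space now has to be monitored instead.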

Cloud storage providers need this operational flexibility to meet the unpredictable growth requirements they may face. Users of cloud storage need equal flexibility from their storage solutions to be able to migrate data from their primary systems to secondary systems. Tools like those from Tek-Tools, APTARE and others focus on capacity management and can identify data that is a valid candidate for migration to the cloud.
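A common signal such capacity-management tools use is file age. As a rough sketch (the 180-day threshold is an assumed policy, not a recommendation, and real products weigh many more factors), a scan by last-access time already beats guessing:

```python
import os
import time

ARCHIVE_AGE_DAYS = 180  # assumed policy threshold, purely illustrative


def archive_candidates(root, age_days=ARCHIVE_AGE_DAYS):
    """Return files whose last access is older than the policy threshold --
    the kind of report produced before data is moved to an archive tier."""
    cutoff = time.time() - age_days * 86400
    stale = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                if os.stat(path).st_atime < cutoff:
                    stale.append(path)
            except OSError:
                continue  # file vanished or unreadable; skip it
    return stale
```

In practice the output of a scan like this feeds the migration step, so the "guess what can be archived" work the article describes becomes a report instead of a manual hunt.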

Performance-centric tools like those from Virtual Instruments and NetApp's SANscreen determine when you are ready for flash storage and which workloads should be moved to it. With continuous monitoring they can also tell you when to move a workload OFF of SSD, once the performance demand no longer justifies it. Because of the cost delta of SSD, unused capacity is unwelcome, as are applications that can't take advantage of the device's performance. Monitoring allows you to keep the SSD full of application data that can.
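The promote-and-demote logic described above can be sketched as a simple hysteresis check. The IOPS thresholds here are invented for illustration (not vendor guidance); the point is that separate promote and demote thresholds keep a workload from flapping between tiers, while still demoting it once demand no longer justifies SSD's cost:

```python
SSD_PROMOTE_IOPS = 5000  # illustrative thresholds, not vendor guidance
SSD_DEMOTE_IOPS = 500


def tier_decision(current_tier, avg_iops):
    """Hysteresis-style placement check: promote hot workloads to SSD,
    demote them back to disk when demand no longer justifies the cost."""
    if current_tier == "disk" and avg_iops >= SSD_PROMOTE_IOPS:
        return "ssd"
    if current_tier == "ssd" and avg_iops < SSD_DEMOTE_IOPS:
        return "disk"
    return current_tier  # gap between thresholds avoids constant flapping
```

Run continuously against monitoring data, a rule like this is what keeps the SSD populated with workloads that actually benefit from it.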

The same goes for deduplication; it should not be applied universally, especially on primary storage. It should be applied where there is enough redundant data to justify any performance impact that may occur. An exception is real-time data compression like that offered by Storwize, which can be applied once, universally, with no performance impact, decreasing CapEx without adversely affecting OpEx.
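Whether a data set has "enough redundant data" can be estimated before turning deduplication on. A minimal sketch of the fixed-block approach: hash each block and compare the total block count to the number of unique fingerprints (real products typically use variable-length chunking and other refinements):

```python
import hashlib


def dedup_ratio(data, block_size=4096):
    """Estimate fixed-block deduplication savings: total blocks divided
    by unique blocks. A ratio near 1.0 means dedup buys little here."""
    seen = set()
    total = 0
    for off in range(0, len(data), block_size):
        seen.add(hashlib.sha256(data[off:off + block_size]).hexdigest())
        total += 1
    return total / len(seen) if seen else 1.0
```

A data set that scores near 1.0 is a poor dedup candidate; the performance impact would not be paid back in capacity savings, which is exactly the judgment call the paragraph above describes.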

The net here, as stated in my last post, is that staff efficiency through tools and systems has to come first or in parallel with any CapEx-reducing initiatives, and that both initiatives are critical for 2009.

Track us on Twitter: http://twitter.com/storageswiss.

Subscribe to our RSS feed.

George Crump is founder of Storage Switzerland, an analyst firm focused on the virtualization and storage marketplaces. It provides strategic consulting and analysis to storage users, suppliers, and integrators. An industry veteran of more than 25 years, Crump has held engineering and sales positions at various IT industry manufacturers and integrators. Prior to Storage Switzerland, he was CTO at one of the nation's largest integrators.
