
George Crump, President, Storage Switzerland

April 21, 2010

4 Min Read

Our <a href="http://www.informationweek.com/blog/main/archives/2010/04/increasing_stor.html">last entry</a> covered ways to increase storage utilization. There are three options: live with under-utilization (easy but costly), refresh your current storage (easy but potentially expensive), or make what you have more efficient (potentially time consuming but potentially inexpensive). Most data centers have a schedule to refresh their current storage systems at some point in the future. In this entry we will look at ways to make that move sooner than normal.

Poor storage utilization has little to do with deduplication or compression; those techniques optimize capacity that is already in use. Poor utilization is capacity that has been allocated to a specific server but holds no actual data. If there is no data, there is nothing to compress or deduplicate. It simply represents many drives that are spinning, consuming power and space, all captive to that single server.
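
To make that distinction concrete, here is a minimal sketch (the server names and TB figures are illustrative assumptions, not data from this article) that computes utilization as capacity actually holding data divided by capacity allocated:

```python
# Hypothetical example: utilization = data actually stored / capacity allocated.
servers = {
    # name: (allocated_tb, used_tb) -- made-up numbers for illustration
    "db01":   (10.0, 2.5),
    "web01":  (4.0, 1.0),
    "mail01": (6.0, 1.8),
}

allocated = sum(a for a, _ in servers.values())
used = sum(u for _, u in servers.values())

print(f"Allocated: {allocated:.1f} TB, actually in use: {used:.1f} TB")
print(f"Utilization: {used / allocated:.0%}")  # ~26%, in the 25-30% range many data centers see
```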

This is where the expense of under-utilization comes in: you are paying to power and house drives that are nowhere close to fully used. The problem is that you simply cannot turn these drives off, and it is unlikely that MAID will help here either. In most cases, when you lay out a volume in a shared storage system you stripe data across drives, and most often those drives sit in different shelves within the array. That helps with availability, but it makes the volume hard to power manage. The result is fragments of data scattered across these drives, and a single access of that data means they all have to be spinning and working.
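
A small sketch of why striping defeats power management (the stripe width and drive count are assumptions for illustration, not a description of any particular array):

```python
# A volume striped across many drives means even a small, localized read
# touches most of the spindles, so none of them can be spun down.
STRIPE_WIDTH_KB = 128          # hypothetical stripe unit
DRIVES = 16                    # drives the volume is striped across

def drive_for_block(logical_kb_offset: int) -> int:
    """Map a logical offset to the drive holding it under simple round-robin striping."""
    return (logical_kb_offset // STRIPE_WIDTH_KB) % DRIVES

# Even a modest 4 MB sequential read...
touched = {drive_for_block(off) for off in range(0, 4 * 1024, STRIPE_WIDTH_KB)}
print(f"Drives that must be spinning for one 4 MB read: {len(touched)} of {DRIVES}")
```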

The way to reduce this need is to allocate fewer drives through thin provisioning or, at a minimum, an array that can add capacity so easily and dynamically that it acts like thin provisioning. Ideally the array should also be able to reclaim capacity when files are deleted from it. We will cover this in more detail in our next entry, on what to look for in a storage refresh.
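
A minimal thin-provisioning sketch, assuming the simplest possible model (allocate physical capacity only on write, hand it back on delete); this is illustrative only, not any vendor's implementation:

```python
class ThinPool:
    """Toy model of a thin-provisioned pool: capacity is consumed on write, not at volume creation."""

    def __init__(self, physical_tb: float):
        self.physical_tb = physical_tb
        self.allocated_tb = 0.0          # physical capacity actually backing data

    def write(self, tb: float) -> None:
        if self.allocated_tb + tb > self.physical_tb:
            raise RuntimeError("pool exhausted -- time to add physical capacity")
        self.allocated_tb += tb          # allocate on write

    def reclaim(self, tb: float) -> None:
        self.allocated_tb = max(0.0, self.allocated_tb - tb)   # deleted data returns to the pool

pool = ThinPool(physical_tb=10)
pool.write(2.5)      # server writes 2.5 TB; only then is capacity consumed
pool.reclaim(1.0)    # deleted files hand capacity back to the pool
print(f"{pool.allocated_tb} TB of {pool.physical_tb} TB physically in use")
```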

The justification for an early refresh is at the heart of this problem. You have to be able to buy a new system that needs less total capacity at the point of purchase than what you have now. If you fall into that typical 25% to 30% utilization category, it should be significantly less. Then, with either thin provisioning or very careful capacity allocation, assign only what each server is actually using today, not what you think it will grow to.
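
A back-of-the-envelope sizing sketch using the 25% to 30% utilization figure above; the growth headroom factor is my assumption, not the author's:

```python
current_raw_tb = 50          # what is installed today
utilization = 0.28           # typical 25-30% utilization
growth_headroom = 1.25       # assumed 25% near-term growth allowance

data_in_use = current_raw_tb * utilization
new_system_tb = data_in_use * growth_headroom

print(f"Data actually in use: {data_in_use:.1f} TB")
print(f"Capacity to buy (thin provisioned): {new_system_tb:.1f} TB vs {current_raw_tb} TB today")
```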

Another justification is being able to use denser drives and, with a virtualized storage system, being able to use all of those drives simultaneously across all the attaching servers. This wide striping delivers the performance those servers need by keeping the effective spindle count per server high enough, while letting you use higher capacity drives, which further reduces space and power consumption.
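
A rough spindle-count comparison; the drive counts and per-drive IOPS figure are assumptions typical of 7200 RPM drives of the era, not numbers from this article:

```python
IOPS_PER_DRIVE = 75   # assumed per-spindle random IOPS

# Traditional layout: each server gets its own small, captive group of drives.
servers = 10
drives_per_server = 8
dedicated_iops = drives_per_server * IOPS_PER_DRIVE

# Wide striping: fewer, denser drives shared by every attaching server.
shared_drives = 24
shared_iops_pool = shared_drives * IOPS_PER_DRIVE

print(f"Dedicated: {servers * drives_per_server} drives, {dedicated_iops} IOPS per server (idle most of the time)")
print(f"Wide-striped: {shared_drives} drives, {shared_iops_pool} IOPS shared across all {servers} servers")
```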

Finally, look to leverage an archive strategy at the same time. In addition to reducing capacity that is allocated but not in use, you can also reduce the capacity actually being consumed by data. Moving old data off of primary storage shortens backup windows, improves retention control, improves replication windows and further reduces costs.

If you can take a 50TB storage footprint and reduce it to 25TB through a new storage system with improved provisioning techniques and denser hard drives, you could potentially cut your power and space requirements by 75%. If you then archive the inactive portion of that data, you could reduce the remaining 25TB by another 50% or more. The net could be a 50TB system that shrinks to roughly 12TB of primary storage and 6TB of archive storage (because the archive can be compressed and deduplicated).
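
Working through that example step by step, using the article's own 50TB starting point and 50% reductions at each stage:

```python
original_tb = 50

primary_after_refresh = original_tb * 0.5            # better provisioning + denser drives -> 25 TB
archived = primary_after_refresh * 0.5                # move the inactive half to the archive tier
primary_final = primary_after_refresh - archived      # ~12.5 TB stays on primary
archive_final = archived * 0.5                        # archive is compressed/deduplicated -> ~6 TB

print(f"Primary: {primary_final:.1f} TB, archive: {archive_final:.1f} TB "
      "-- roughly the 12 TB / 6 TB cited above")
```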

Considering that every watt eliminated from the data center adds up to almost three watts in total savings once cooling and power distribution are factored in, this refresh can pay for itself very quickly in power savings alone, let alone the easier management and the savings from buying less physical storage. Factor in the reclaimed floor space and the elimination of a future storage refresh, and jumping on this now may be the best use of the IT budget.
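
A hedged payback sketch: the drive count, watts per drive, electricity price, and the roughly 3x facility multiplier for cooling and power distribution are all assumptions for illustration only.

```python
drives_removed = 150
watts_per_drive = 10
facility_multiplier = 2.8           # each watt saved at the device saves ~3 watts overall
price_per_kwh = 0.10                # USD, assumed

device_watts = drives_removed * watts_per_drive
total_watts = device_watts * facility_multiplier
annual_savings = total_watts / 1000 * 24 * 365 * price_per_kwh

print(f"About {total_watts / 1000:.1f} kW of total load removed, "
      f"roughly ${annual_savings:,.0f} per year in power alone")
```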

Track us on Twitter: http://twitter.com/storageswiss

Subscribe to our RSS feed.

George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Find Storage Switzerland's disclosure statement here.

About the Author(s)

George Crump

President, Storage Switzerland

George Crump is president and founder of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. With 25 years of experience designing storage solutions for datacenters across the US, he has seen the birth of such technologies as RAID, NAS, and SAN. Prior to founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration, and product selection. George is responsible for the storage blog on InformationWeek's website and is a regular contributor to publications such as Byte and Switch, SearchStorage, eWeek, SearchServerVirtualization, and SearchDataBackup.

