This is where the expense of under-utilization comes in: the cost to power and house drives that are currently not being used to anywhere near their full capacity. The problem is that you simply cannot turn these drives off, and it is unlikely that MAID will help here either. In most cases, when you lay out a volume in a shared storage system, you stripe data across drives, and most often those drives are in different shelves within the array. This helps with availability, but it makes power management difficult. The result is fragments of data scattered across these drives, and a single access of that data means they all have to be powered on and working.
The way to reduce this waste is to allocate fewer drives through thin provisioning or, at a minimum, to use an array that can dynamically add capacity so easily that it acts like thin provisioning. Ideally the array should also be able to reclaim the capacity of files that are deleted from it. We will cover this in more detail in our next entry on what to look for in a storage refresh.
At the heart of this problem is how to justify an early refresh. You have to be able to buy a new system that actually needs less total capacity at the point of purchase than what you have now. If you fall into the typical 25% to 30% utilization category, it should need significantly less. Then, with either thin provisioning or very careful capacity allocation, assign only what is actually in use by each server, not what you think it will grow to.
Another justification is the ability to use denser drives and, with a virtualized storage system, to use all of those drives simultaneously across all the attaching servers. This wide striping delivers the performance those servers need by keeping the per-server spindle count high enough, while allowing you to use higher-capacity drives, which further reduces space and power consumption.
Finally, look to leverage an archive strategy at the same time. In addition to reducing the capacity that is allocated but not in use, you can also reduce the capacity actually being consumed by data. Moving old data off of primary storage shortens backup windows, improves retention control, shortens replication windows and further reduces costs. If you can take a 50TB storage footprint and reduce it to 25TB through a new storage system with improved provisioning techniques and denser hard drives, you could potentially cut your power and space requirements by 75%. If you then archive the older, inactive data, you could reduce that remaining 25TB by another 50% or more. The net could be a 50TB system that shrinks to 12TB of primary storage and 6TB of archive storage (because the archive can be compressed and deduplicated).
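The arithmetic above can be sketched as a quick back-of-the-envelope model. The percentages are the article's illustrative figures, not measurements, and the parameter names are ours:

```python
# Model of the consolidation math described above.
# All figures are illustrative estimates from the article, not measurements.

def refresh_footprint(current_tb=50.0,
                      provisioning_reduction=0.50,  # thin provisioning + denser drives
                      archive_fraction=0.50,        # share of remaining data moved to archive
                      archive_shrink=0.50):         # compression + dedupe on the archive tier
    """Return (primary_tb, archive_tb) after the refresh."""
    after_refresh = current_tb * (1 - provisioning_reduction)          # 50TB -> 25TB
    primary = after_refresh * (1 - archive_fraction)                   # 25TB -> 12.5TB
    archive = after_refresh * archive_fraction * (1 - archive_shrink)  # 12.5TB moved -> 6.25TB stored
    return primary, archive

primary, archive = refresh_footprint()
print(f"Primary: {primary:.1f}TB, Archive: {archive:.1f}TB")
# Prints: Primary: 12.5TB, Archive: 6.2TB -- roughly the 12TB/6TB cited above
```

Adjusting the percentages to your own utilization numbers makes it easy to see whether the refresh clears your cost threshold.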
Considering that every watt eliminated from the data center adds up to almost three watts in total savings, this refresh can pay for itself very quickly in power savings alone, let alone easier management and the savings from buying less physical storage. Factor in the reclaimed floor space and the elimination of a future storage refresh, and jumping on this now may be the best use of the IT budget.
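That roughly three-to-one multiplier reflects the cooling and power-delivery overhead behind every watt a device draws. A minimal sketch of the savings math, assuming an example per-drive wattage (the multiplier varies by facility):

```python
# Rough annual power savings from decommissioning drives. Each watt removed
# at the drive also removes upstream cooling and power-distribution load;
# the ~3x multiplier is the article's figure. Drive wattage is an assumed
# example value, not a measurement.

WATTS_PER_DRIVE = 12          # assumed average draw for an enterprise HDD
TOTAL_SAVINGS_MULTIPLIER = 3  # direct watt + cooling/distribution overhead

def annual_kwh_saved(drives_removed, hours=24 * 365):
    """Estimated facility-wide kWh saved per year."""
    watts = drives_removed * WATTS_PER_DRIVE * TOTAL_SAVINGS_MULTIPLIER
    return watts * hours / 1000  # convert watt-hours to kWh

print(f"{annual_kwh_saved(100):,.0f} kWh/year saved by removing 100 drives")
# Prints: 31,536 kWh/year saved by removing 100 drives
```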
George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Find Storage Switzerland's disclosure statement here.