News

4/21/2010
12:32 PM
George Crump
Commentary

Justifying An Early Storage Refresh

Our last entry covered ways to increase storage utilization. There are three options: live with under-utilization (easy but costly), refresh your current storage (easy but potentially expensive), or make what you have more efficient (potentially time consuming but potentially inexpensive). Most data centers have a schedule to refresh their current storage systems at some point in the future. In this entry we will look at ways to justify making that move sooner than normal.

Poor storage utilization has little to do with deduplication or compression; those techniques optimize capacity that is already in use. Poor utilization is capacity that has been allocated to a specific server but holds no actual data. If there is no data, there is nothing to compress or deduplicate. It just represents many drives that are spinning, consuming power and space, all captive to that single server.
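
To make the math concrete (the numbers here are purely hypothetical), utilization is simply the data actually stored divided by the capacity carved out for the server; everything else is stranded:

```python
# Hypothetical example: a server allocated far more capacity than it uses.
allocated_tb = 10.0   # capacity carved out for this server
used_tb = 2.8         # capacity actually holding data

utilization = used_tb / allocated_tb
stranded_tb = allocated_tb - used_tb
print(f"Utilization: {utilization:.0%}")           # Utilization: 28%
print(f"Stranded capacity: {stranded_tb:.1f} TB")  # Stranded capacity: 7.2 TB
```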

This is where the expense of under-utilization comes in: you are paying to power and house drives that are nowhere close to fully used. The problem is that you simply cannot turn these drives off, and it is unlikely that MAID will help here either. In most cases, when you lay out a volume in a shared storage system, you stripe data across drives, and most often those drives sit in different shelves within the array. This helps with availability, but it makes power management hard. The result is fragments of data scattered across these drives, and a single access of that data means they all have to be spun up and working.

The way to reduce this waste is to allocate fewer drives through thin provisioning or, at a minimum, an array that can add capacity so easily and dynamically that it acts like thin provisioning. Ideally the array should also be able to reclaim capacity when files are deleted from it. We will cover this in more detail in our next entry on what to look for in a storage refresh.
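
As a rough sketch of how thin provisioning behaves (an illustration, not any vendor's implementation): a volume advertises its full size to the server, but physical capacity is drawn from a shared pool only as data is actually written.

```python
# Sketch of thin provisioning: volumes consume physical capacity only as
# data is written, not when they are created. Illustrative only.
class ThinPool:
    def __init__(self, physical_tb: float):
        self.physical_tb = physical_tb   # real drives behind the pool
        self.written_tb = 0.0            # capacity actually consumed

    def create_volume(self, advertised_tb: float) -> dict:
        # Creating a volume reserves nothing up front.
        return {"advertised_tb": advertised_tb, "written_tb": 0.0}

    def write(self, volume: dict, tb: float) -> None:
        # Physical capacity is drawn from the pool only as data lands.
        if self.written_tb + tb > self.physical_tb:
            raise RuntimeError("pool exhausted -- add shelves before this point")
        volume["written_tb"] += tb
        self.written_tb += tb

pool = ThinPool(physical_tb=25.0)
vol = pool.create_volume(advertised_tb=10.0)  # server sees 10 TB
pool.write(vol, 2.8)                          # only 2.8 TB leaves the pool
print(f"Pool in use: {pool.written_tb} of {pool.physical_tb} TB")
```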

The justification for an early refresh lies at the heart of this problem: you have to be able to buy a new system that actually needs less total capacity at the point of purchase than what you have now. If you fall into that typical 25% to 30% utilization category, it should be significantly less. Then, with either thin provisioning or very careful capacity allocation, assign only what each server is really using now, not what you think it will grow to.
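
A simple way to size that purchase (the server names and numbers below are made up) is to total what is actually in use, then add a modest growth buffer rather than each server's worst-case guess:

```python
# Hypothetical inventory: (allocated TB, actually used TB) per server.
servers = {
    "db01":   (10.0, 3.1),
    "mail01": (6.0, 1.4),
    "file01": (12.0, 4.2),
}

allocated = sum(a for a, _ in servers.values())
used = sum(u for _, u in servers.values())
headroom = 1.2  # assumed 20% growth buffer -- tune to your environment

print(f"Allocated today: {allocated:.0f} TB, actually used: {used:.1f} TB")
print(f"New system sized to use plus headroom: {used * headroom:.1f} TB")
```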

Another justification is being able to use denser drives and, with a virtualized storage system, to use all of those drives simultaneously across all of the attaching servers. This wide striping delivers the performance those servers need by keeping the per-server spindle count high enough, while allowing you to use higher-capacity drives, which further reduces space and power consumption.
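
The spindle math behind wide striping (drive counts below are illustrative): with dedicated allocation, each server's performance is limited to its own handful of drives, while a wide-striped pool puts every drive behind every server, even with fewer, denser drives overall.

```python
# Illustrative drive counts only.
servers = 10

# Dedicated layout: each server is confined to its own small drive set.
dedicated_per_server = 8
dedicated_total = servers * dedicated_per_server   # 80 drives in total

# Wide striping: every volume spans the whole pool, so each server's I/O
# is spread across all spindles at once.
pooled_total = 40                                  # half the drives

print(f"Dedicated: {dedicated_total} drives, {dedicated_per_server} behind each server")
print(f"Wide-striped: {pooled_total} drives, all {pooled_total} behind each server")
```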

Finally, look to leverage an archive strategy at the same time. In addition to reducing the capacity that is allocated but not in use, you can also reduce the capacity actually being consumed by data. Moving old data off of primary storage shrinks backup windows, improves retention control, improves replication windows, and further reduces costs.

If you can take a 50TB storage footprint and reduce it to 25TB through a new storage system with improved provisioning techniques, and do it on denser hard drives, you could potentially cut your power and space requirements by 75%: half the capacity on drives with twice the density means roughly a quarter of the spindles. If you then archive the older, less active data, you could reduce that remaining 25TB by another 50% or more. The net could be a 50TB system that shrinks to 12TBs of primary storage and 6TBs of archive storage (because the archive can be compressed and deduplicated).
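
Walking through that arithmetic (the percentages are this article's working assumptions, not guarantees):

```python
primary_tb = 50.0
provisioned_tb = primary_tb * 0.5        # thin provisioning: 50 TB -> 25 TB

# Denser drives: half the capacity on drives with twice the density
# means roughly a quarter of the spindles -- the ~75% power/space cut.
density_gain = 2.0
spindle_ratio = 0.5 / density_gain       # 0.25 of the original drive count

active_tb = provisioned_tb * 0.5         # archive stale data: 25 TB -> 12.5 TB
archived_tb = (provisioned_tb - active_tb) * 0.5   # compressed/deduped: ~6 TB

print(f"Spindles remaining: {spindle_ratio:.0%} of the original count")
print(f"Primary: {active_tb:.1f} TB, archive: {archived_tb:.1f} TB")
```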

Considering that every watt eliminated from the data center adds up to almost three watts in total savings once cooling and power distribution are factored in, this refresh can pay for itself very quickly in power savings alone, let alone the easier management and the savings from buying less physical storage. Factor in the reclaimed floor space and the elimination of a future storage refresh, and jumping on this now may be the best use of the IT budget.
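
A back-of-the-envelope payback estimate; every figure below is an assumption to substitute your own numbers into:

```python
# All figures are assumptions -- substitute your own.
drives_removed = 60          # spindles eliminated by the refresh
watts_per_drive = 10         # assumed draw for a spinning nearline drive
facility_multiplier = 3.0    # ~3 facility watts saved per IT watt (cooling, UPS)
dollars_per_kwh = 0.12       # assumed local power price

watts_saved = drives_removed * watts_per_drive * facility_multiplier
annual_kwh = watts_saved * 24 * 365 / 1000
print(f"~{watts_saved:.0f} W saved, roughly ${annual_kwh * dollars_per_kwh:,.0f}/year in power")
```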

Track us on Twitter: http://twitter.com/storageswiss

Subscribe to our RSS feed.

George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Find Storage Switzerland's disclosure statement here.
