News | Commentary
4/29/2010 09:38 AM
George Crump

Fixing Storage Utilization Without A Refresh

In the final part of our storage utilization series we address how to improve storage utilization without refreshing the storage itself. This is, unfortunately, the most difficult way to improve storage utilization.

It is much easier to start from a clean slate with a storage system designed to maintain high levels of utilization. The first step in improving the utilization of an existing system is to determine whether the effort is even worth it.

One of the goals of higher utilization is to make sure that if drives and shelves are installed and drawing power, they hold enough data to justify their power and space consumption as well as their cost. As mentioned in the other entries in this series, storage utilization needs to be addressed in two areas. The first, and the most often thought of, is increasing the efficiency of stored data via compression, deduplication, or migration to archival storage.

These methods only work on actual data. The larger issue is often storage capacity that is assigned to particular servers but not in use: that capacity is captive. It is the room for growth that you assign to each server as it connects to the storage system.
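To make the notion of captive capacity concrete, this small sketch totals the space assigned to servers but holding no data. The volume names and sizes are invented for illustration, not drawn from any real array:

```python
# Hypothetical inventory: provisioned (allocated) vs. actually used capacity.
volumes = {
    "db01":  {"allocated_gb": 500, "used_gb": 120},
    "web01": {"allocated_gb": 300, "used_gb": 45},
    "mail":  {"allocated_gb": 800, "used_gb": 610},
}

def captive_capacity_gb(vols):
    """Capacity assigned to servers but holding no data."""
    return sum(v["allocated_gb"] - v["used_gb"] for v in vols.values())

def utilization_pct(vols):
    """Used capacity as a percentage of everything provisioned."""
    allocated = sum(v["allocated_gb"] for v in vols.values())
    used = sum(v["used_gb"] for v in vols.values())
    return 100.0 * used / allocated

print(captive_capacity_gb(volumes))        # 825
print(round(utilization_pct(volumes), 1))  # 48.4
```

Here more than half the provisioned capacity is powered, spinning, and empty.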

The first method, making data already stored consume less space, works on existing storage systems as well as new ones. The problem is that this often compounds the second issue, captive storage, by making even more space available but unused. The most viable way to address the second problem is to implement a thinly provisioned storage system, or a storage system that can very easily expand capacity on existing volumes.
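A quick back-of-the-envelope calculation shows how data reduction compounds the captive capacity problem. The volume size and the 2:1 reduction ratio below are assumptions for illustration only:

```python
allocated_gb = 500   # provisioned to the server
used_gb = 400        # data actually stored

# Assume a combined 2:1 compression/deduplication ratio.
reduced_gb = used_gb / 2.0

captive_before = allocated_gb - used_gb     # space assigned but unused
captive_after = allocated_gb - reduced_gb   # reduction frees space *inside* the volume

print(captive_before, captive_after)  # 100 300.0
```

The freed space stays trapped inside the volume: reduction tripled the captive capacity without returning a single shelf to the pool.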

If your current storage system can't do this, there are add-on solutions that can provide it, delivered either through a NAS software/gateway solution or an external software-based storage virtualization application. From that point forward, new volumes you create can be thin provisioned. While this does not optimize what is underutilized now, you can at least stop the problem from getting worse.
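One way to see the thin provisioning idea in miniature is a sparse file on a POSIX filesystem: the logical size is promised up front, but blocks are only consumed as data is actually written. This is a sketch of the concept, not of any vendor's implementation, and it assumes a filesystem that supports sparse files:

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "thin.img")
with open(path, "wb") as f:
    f.truncate(10 * 1024 * 1024)   # promise 10 MiB of logical capacity
    f.write(b"x" * 4096)           # only this write consumes real blocks

st = os.stat(path)
logical_bytes = st.st_size
physical_bytes = st.st_blocks * 512   # blocks actually backed by storage

print(logical_bytes, physical_bytes)
```

A thin-provisioned array makes the same promise to each server, drawing real capacity from a shared pool only as data lands.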

Dealing with existing captive capacity, though, is a bigger challenge, and unfortunately it is also where the utilization issue is typically the worst. What makes this difficult is that the best practice for most array systems is to create volumes whose drives span vertically across several shelves in the array. Even if you could simply shrink the size of a volume, the drives would still be in use; nothing could be powered off, and nothing would be saved.

The best way to address this is to implement a "thin aware" file system that can also perform "thin migrations" alongside the new thin provisioning software. You could then define a new volume that uses fewer physical drives and shelves and, leveraging a thin migration (copying only actual data), migrate the data from one volume to the other. With the right file system this migration can be done live, without impact on users. Depending on the space available, you may have to migrate just a few volumes at a time. As you take volumes offline, you should be able to begin shutting down drive shelves.
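The thin migration step can be modeled as a sparse-aware copy: only blocks that hold data are read and written, so the target volume starts out consuming just the real footprint. The block layout here is invented for illustration:

```python
BLOCK = 4
ZERO = b"\x00" * BLOCK

# Source volume: six blocks provisioned, only three hold data.
source = [b"data", ZERO, b"more", ZERO, ZERO, b"tail"]

def thin_migrate(blocks):
    """Copy only allocated (non-zero) blocks to the target volume."""
    return {i: b for i, b in enumerate(blocks) if b != ZERO}

target = thin_migrate(source)
print(len(source), len(target))  # 6 3
```

The target ends up with half the physical footprint of the source, which is exactly the savings that lets you consolidate onto fewer shelves.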

This may seem like a lot of work to increase utilization. It is. The payoff, though, can be racks of storage that are no longer needed, along with all the other benefits that the modernized storage software and thin provisioned file system will deliver. In some cases the power savings alone, or the freeing up of power for other systems, can justify the effort.
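The power argument is easy to sanity-check with rough numbers. The wattage per shelf and the electricity rate below are assumptions; substitute your own figures:

```python
def annual_power_cost_usd(shelves, watts_per_shelf=400, usd_per_kwh=0.12):
    """Yearly electricity cost of running the given number of drive shelves."""
    kwh_per_year = shelves * watts_per_shelf * 24 * 365 / 1000
    return kwh_per_year * usd_per_kwh

# Shutting down four shelves after consolidation:
print(round(annual_power_cost_usd(4), 2))  # 1681.92
```

Cooling overhead would add to this figure, and freeing that power for other systems can matter even more than the raw cost in a constrained data center.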

Track us on Twitter: http://twitter.com/storageswiss

Subscribe to our RSS feed.

George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Find Storage Switzerland's disclosure statement here.
