Commentary
4/29/2010 09:38 AM
George Crump

Fixing Storage Utilization Without A Refresh

In the final part of our storage utilization series we address how to improve storage utilization without refreshing the storage itself. This is, unfortunately, the most difficult way to improve storage utilization.

It is much easier to start from a clean slate with a storage system designed to maintain high levels of utilization. The first step in improving utilization of an existing system is to determine whether it is even worth the effort.

One of the goals of higher utilization is to make sure that if drives and shelves are installed and drawing power, they hold enough data to justify their power and space consumption as well as their cost. As mentioned in the earlier entries in this series, storage utilization needs to be addressed in two areas. The first, and the one most often thought of, is increasing the efficiency of stored data via compression, deduplication, or migration to archival storage.
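To make the efficiency side concrete, here is a minimal sketch (the function name, ratios, and figures are illustrative assumptions, not from the article) of how dedup and compression ratios combine to shrink the physical footprint of stored data:

```python
# Hypothetical illustration: estimate physical space consumed after
# deduplication and compression are applied to already-stored data.

def effective_size(logical_gb, dedup_ratio, compress_ratio):
    """Physical GB consumed after dedup, then compression.

    dedup_ratio and compress_ratio are reduction ratios, e.g. 2.0 means 2:1.
    """
    if dedup_ratio < 1 or compress_ratio < 1:
        raise ValueError("reduction ratios must be >= 1")
    return logical_gb / (dedup_ratio * compress_ratio)

# 10 TB of logical data at an assumed 3:1 dedup and 1.5:1 compression
physical = effective_size(10_000, 3.0, 1.5)
print(f"{physical:.0f} GB physical")  # prints "2222 GB physical"
```

Note that these ratios multiply rather than add, which is why combining techniques can be so effective on the right data.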

These methods only work on actual data. The larger issue is often storage capacity that is assigned to particular servers but not in use; that capacity is captive. It is the room for growth you assign to each server as it connects to the storage system.
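Captive capacity is easy to quantify: it is the gap between what each volume was allocated and what it actually holds. The sketch below (volume names and sizes are made up for illustration) shows the calculation:

```python
# Hypothetical inventory: allocated vs. actually used capacity per volume.
volumes = {
    # name: (allocated_gb, used_gb)
    "db01":  (2000, 450),
    "web01": (500, 120),
    "mail":  (1000, 900),
}

allocated = sum(a for a, _ in volumes.values())
used = sum(u for _, u in volumes.values())
captive = allocated - used  # assigned but never written

print(f"utilization: {used / allocated:.0%}, captive: {captive} GB")
# prints "utilization: 42%, captive: 2030 GB"
```

Running numbers like these across an array is a quick way to decide whether reclaiming captive space is worth the effort at all.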

The first method, making data already stored consume less space, works on existing storage systems as well as new ones. The problem is that this often compounds the second issue, captive storage, by making even more space available but unused. The most viable way to address the second problem is to implement a thinly provisioned storage system, or a storage system that can easily expand the capacity of existing volumes.

If your current storage system can't do this, there are add-on solutions that can provide it, delivered either through a NAS software/gateway solution or an external software-based storage virtualization application. From that point forward, new volumes you create can be thin provisioned. While this does not optimize what is underutilized now, it at least stops the problem from getting worse.
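The core idea behind thin provisioning can be sketched in a few lines. This is a toy model for illustration only, not any vendor's implementation: physical blocks are allocated lazily on first write, so a large advertised volume consumes almost nothing until data lands on it.

```python
# Minimal thin-provisioning sketch (an assumption for illustration):
# the host sees a large virtual volume, but physical blocks are
# allocated only when a virtual block is first written.

class ThinVolume:
    def __init__(self, virtual_blocks):
        self.virtual_blocks = virtual_blocks  # size advertised to the host
        self.mapping = {}                     # virtual block -> physical block

    def write(self, vblock, data):
        if not 0 <= vblock < self.virtual_blocks:
            raise IndexError("write past end of volume")
        if vblock not in self.mapping:
            # allocate a physical block lazily, on first write
            self.mapping[vblock] = len(self.mapping)
        # (a real system would now write `data` to the physical block)

    @property
    def physical_blocks(self):
        return len(self.mapping)

vol = ThinVolume(virtual_blocks=1_000_000)  # looks like a huge volume
vol.write(0, b"boot")
vol.write(42, b"data")
print(vol.physical_blocks)  # prints 2: only written blocks consume space
```

The host still believes it has a million blocks of headroom, which is exactly the "room for growth" the article describes, but that headroom no longer holds physical drives captive.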

Dealing with existing captive capacity, though, is a bigger challenge, and unfortunately it is also where the utilization issue is typically the worst. What makes this difficult is that the best practice for most array systems is to create volumes whose drives span vertically across several shelves in the array. Even if you could simply shrink a volume, its drives would still be in use; nothing could be powered off, and nothing would be saved.

The best way to address this is to implement a "thin aware" file system that can also perform "thin migrations" alongside the new thin provisioning software. You could then define a new volume that uses fewer physical drives and shelves and, leveraging a thin migration (copying only the blocks that hold actual data), move the data from one volume to another. With the right file system this migration can be done live, without impacting users. Depending on the space available, you may have to migrate just a few volumes at a time. As you take volumes offline, you should be able to begin shutting down drive shelves.
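The migration step above can be sketched as follows. Again this is a hypothetical illustration, with made-up block maps: a thin migration walks only the allocated blocks, skipping the unwritten space that a full block-level copy would drag along.

```python
# Hypothetical "thin migration" sketch: copy only blocks that hold real
# data, so the target volume can live on fewer physical drives.

def thin_migrate(source_map, target_map):
    """Copy only allocated (written) blocks from source to target.

    source_map / target_map: dicts mapping virtual block -> data.
    Returns the number of blocks copied.
    """
    copied = 0
    for vblock, data in source_map.items():
        target_map[vblock] = data
        copied += 1
    return copied

# A 1,000-block volume where only 3 blocks were ever written:
source = {0: b"superblock", 17: b"inode", 512: b"file"}
target = {}
print(thin_migrate(source, target))  # prints 3: moves 3 blocks, not 1,000
```

Because only live data moves, the copy window shrinks in proportion to the real utilization, which is what makes migrating a few volumes at a time practical.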

This may seem like a lot of work to increase utilization. It is. The payoff, though, can be racks of storage that are no longer needed, along with all the other benefits that the modernized storage software and thin provisioned file system will deliver. In some cases the power savings alone, or the freeing up of power for other systems, can justify the effort.
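A back-of-the-envelope payback estimate helps make that case. All figures below (drive wattage, shelf overhead, electricity price) are assumptions for illustration; substitute your own numbers.

```python
# Rough yearly electricity cost for a constant power draw,
# before any cooling or data-center overhead is counted.

def annual_power_cost(watts, cents_per_kwh=12.0):
    """Dollars per year for a device drawing `watts` continuously."""
    kwh_per_year = watts * 24 * 365 / 1000
    return kwh_per_year * cents_per_kwh / 100

# e.g. a shelf of 15 drives at ~10 W each, plus ~50 W of shelf overhead;
# retiring two such shelves:
shelf_watts = 15 * 10 + 50
print(f"${annual_power_cost(2 * shelf_watts):,.0f} per year")  # prints "$420 per year"
```

Scaled across racks of reclaimed shelves, and doubled or more once cooling is included, figures like this are what can justify the migration effort on power alone.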

Track us on Twitter: http://twitter.com/storageswiss

Subscribe to our RSS feed.

George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Find Storage Switzerland's disclosure statement here.
