Fixing Storage Utilization Without A Refresh

In the final part of our storage utilization series we address how to improve storage utilization without refreshing the storage itself. This is, unfortunately, the most difficult way to improve storage utilization.

George Crump, President, Storage Switzerland

April 29, 2010

3 Min Read

It is much easier to start from a clean slate with a storage system designed to maintain high levels of utilization. The first step in improving utilization of an existing system is to determine whether it is even worth the effort.

One of the goals of higher utilization is to make sure that if drives and shelves are installed and drawing power, they hold enough data to justify their power and space consumption as well as their cost. As mentioned in the earlier entries in this series, storage utilization needs to be addressed in two areas. The first, and the one most often thought of, is increasing the efficiency of stored data via compression, deduplication, or migration to archival storage.
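As a rough worth-it check, you can compare what the shelves cost to run against how much data they actually hold. The sketch below uses hypothetical wattage, capacity, and electricity figures purely for illustration; substitute your own numbers.

```python
# Sketch: is chasing higher utilization worth it on an existing array?
# All figures (watts per shelf, $/kWh, capacities) are hypothetical.

def cost_per_used_tb(shelves, tb_per_shelf, used_tb,
                     watts_per_shelf=450, dollars_per_kwh=0.12):
    """Return (annual power cost per TB actually stored, utilization)."""
    annual_kwh = shelves * watts_per_shelf * 24 * 365 / 1000
    power_cost = annual_kwh * dollars_per_kwh
    utilization = used_tb / (shelves * tb_per_shelf)
    return power_cost / used_tb, utilization

# Ten 12 TB shelves holding only 30 TB of real data:
cost, util = cost_per_used_tb(shelves=10, tb_per_shelf=12, used_tb=30)
print(f"utilization: {util:.0%}, power cost per used TB: ${cost:.2f}/yr")
```

A low utilization figure combined with a high per-TB running cost is the signal that the effort described in the rest of this article will pay off.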

These methods only work on actual data. The larger issue is often storage capacity that is assigned to particular servers but not in use; that capacity is captive. It is the room for growth that you assign to each server as it connects to the storage system.

The first method, making data already stored consume less space, works on existing storage systems as well as new ones. The problem is that it often compounds the second issue, captive storage, by making even more space available but unused. The most viable way to address the second problem is to implement a thinly provisioned storage system, or a storage system that can very easily expand capacity on existing volumes.
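The core idea of thin provisioning can be sketched in a few lines: the volume advertises a large logical size to the server, but physical capacity is only consumed when an extent is actually written. This is an illustrative model, not how any particular array implements it in firmware.

```python
# Sketch of thin provisioning: the server sees the full logical size,
# but physical extents are allocated only on first write.

class ThinVolume:
    def __init__(self, logical_gb):
        self.logical_gb = logical_gb   # capacity the server believes it has
        self.allocated = set()         # 1 GB extents actually backed by disk

    def write(self, extent):
        if not 0 <= extent < self.logical_gb:
            raise ValueError("write beyond logical size")
        self.allocated.add(extent)     # allocate on first write only

    @property
    def physical_gb(self):
        return len(self.allocated)     # real capacity consumed

vol = ThinVolume(logical_gb=500)       # server sees 500 GB
for extent in range(40):               # but only 40 GB is ever written
    vol.write(extent)
print(vol.logical_gb, vol.physical_gb)
```

The gap between `logical_gb` and `physical_gb` is exactly the captive capacity that a thick-provisioned volume would have locked away on day one.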

If your current storage system can't do this, there are add-on solutions that can provide it for you, delivered either through a NAS software/gateway solution or an external software-based storage virtualization application. From that point forward, new volumes you create can be thin provisioned. While this does not optimize what is already underutilized, you can at least stop the problem from getting worse.

Dealing with existing captive capacity, though, is a bigger challenge, and unfortunately it is also where the utilization issue is typically the worst. What makes this difficult is that the best practice for most array systems is to create volumes whose drives span vertically across several shelves in the array. Even if you could shrink the size of a volume, the drives would still be in use. Nothing could be powered off; nothing would be saved.

The best way to address this is to implement a "thin aware" file system that can also perform "thin migrations," alongside the new thin provisioning software. You could then define a new volume that uses fewer physical drives and shelves and, leveraging a thin migration, which copies only actual data, move the data from the old volume to the new one. With the right file system this migration can be done live, without impacting users. Depending on the space available, you may only be able to migrate a few volumes at a time. As you take volumes offline, you should be able to begin shutting down drive shelves.
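The payoff of that migration step is consolidation: packing each volume's *used* data onto as few shelves as possible so the rest can be powered off. The sketch below models this with a simple first-fit placement; the shelf capacity and volume sizes are hypothetical, and a real migration tool would of course move blocks, not numbers.

```python
# Sketch: after thin migration, pack each volume's used data onto the
# fewest shelves and report which shelves are left empty. Capacities
# and volume sizes are hypothetical.

def consolidate(volume_used_tb, shelf_capacity_tb, total_shelves):
    """First-fit packing of used data; returns indexes of idle shelves."""
    shelves = [0.0] * total_shelves            # TB placed on each shelf
    for used in sorted(volume_used_tb.values(), reverse=True):
        for i, load in enumerate(shelves):
            if load + used <= shelf_capacity_tb:
                shelves[i] = load + used       # place volume here
                break
        else:
            raise RuntimeError("used data exceeds total shelf capacity")
    return [i for i, load in enumerate(shelves) if load == 0.0]

# Five volumes whose drives currently span all eight shelves:
idle = consolidate({"vol_a": 3.0, "vol_b": 2.5, "vol_c": 1.0,
                    "vol_d": 4.0, "vol_e": 1.5},
                   shelf_capacity_tb=12.0, total_shelves=8)
print(f"{len(idle)} of 8 shelves can be powered off")
```

Before migration, every shelf was partially in use and none could be shut down; after consolidating the 12 TB of real data, most shelves go idle, which is where the power and rack-space savings come from.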

This may seem like a lot of work to increase utilization. It is. The payoff though can be racks of storage that are no longer needed as well as all the other benefits that the modernized storage software and thin provisioned file system will deliver. In some cases the power savings alone or the freeing up of power for other systems can justify the effort.

Track us on Twitter: http://twitter.com/storageswiss

Subscribe to our RSS feed.

George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Find Storage Switzerland's disclosure statement here.

About the Author(s)

George Crump

President, Storage Switzerland

George Crump is president and founder of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. With 25 years of experience designing storage solutions for datacenters across the US, he has seen the birth of such technologies as RAID, NAS, and SAN. Prior to founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration, and product selection. George is responsible for the storage blog on InformationWeek's website and is a regular contributor to publications such as Byte and Switch, SearchStorage, eWeek, SearchServerVirtualization, and SearchDataBackup.

