What To Do With Too Much Storage Performance
I recently concluded a series that examined the components of the storage environment that can impact overall storage I/O performance: <a href="http://www.informationweek.com/blog/main/archives/2009/10/understanding_s_1.html">storage I/O bandwidth</a>, <a href="http://www.informationweek.com/blog/main/archives/2009/10/understanding_s_2.html.com">controllers</a> and <a href="http://www.informationweek.com/blog/main/archives/2009/10/understanding_h.html">drives</a>. But what if you are like many data centers and don't need to wring every last drop of I/O performance out of your storage infrastructure? What should you do with too much storage performance?
For possibly the majority of data centers, default storage configurations provide all the storage performance and capacity they will ever need. In fact, the storage controllers or arrays sit at almost zero utilization throughout the day. Instead of letting the storage processor sit on the couch, put it to work.
The first option is to make the storage processor itself do more. In the average environment, beyond the basics of serving up capacity and providing disk redundancy, the most you ask the storage controller to do is manage snapshots and possibly replication. There are solutions that let you add deduplication or compression. These features save space, but the compression/decompression cycle can chew up storage processing resources, potentially impacting performance. If those storage resources are sitting idle anyway, however, you might as well use them and save some money in the process.
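To see why the compress/decompress cycle consumes processor time, consider a minimal sketch using Python's standard zlib module. The block size, sample data and compression levels here are illustrative assumptions, not a model of any particular array's data reduction engine.

```python
import time
import zlib

# Illustrative only: roughly 8 MB of moderately repetitive data,
# standing in for a chunk of file or volume data on the array.
block = b"customer_record,2009-10-01,ACME Corp,ordered 42 widgets\n" * 150000

for level in (1, 6, 9):  # fast, default and maximum compression
    start = time.time()
    compressed = zlib.compress(block, level)
    elapsed = time.time() - start
    ratio = len(block) / len(compressed)
    print(f"level {level}: {ratio:.1f}:1 ratio, {elapsed * 1000:.0f} ms CPU per block")

# Higher levels reclaim more capacity but burn more processor time --
# cycles that an otherwise idle storage controller can afford to spend.
```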
NAS-based systems are typically where you'll see the ability to have the system do extra work. NetApp systems can perform deduplication, and EMC's Celerra as well as Sun's ZFS can provide compression and deduplication. Block-based (SAN) systems can also provide compression and thin provisioning, as well as intelligent data placement. Whether SAN or NAS, all of these features put the storage processor to work and can save capacity.
We are even seeing this right-sizing of performance in solid state disk (SSD). Companies like WhipTail Technologies are integrating inline deduplication into their SSD systems to increase the effective capacity of the SSD. Deduplication does affect performance, but if you need more than mechanical drive performance yet don't need the ultra-high performance of standard SSD, this technology can provide a happy medium.
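As a rough illustration of that middle ground, the short sketch below works out effective capacity and cost per usable gigabyte for an SSD system with inline deduplication. The capacity, price and 3:1 deduplication ratio are hypothetical numbers chosen only to show the arithmetic, not vendor figures.

```python
# Hypothetical figures, for illustration only.
raw_ssd_gb = 1500          # raw flash capacity of the system
system_cost = 30000.0      # assumed purchase price, in dollars
dedupe_ratio = 3.0         # assumed 3:1 inline deduplication ratio

effective_gb = raw_ssd_gb * dedupe_ratio
print(f"Usable capacity: ~{effective_gb:.0f} GB")
print(f"Cost per raw GB:    ${system_cost / raw_ssd_gb:.2f}")
print(f"Cost per usable GB: ${system_cost / effective_gb:.2f}")

# The trade-off: inline deduplication taxes every write, so some raw SSD
# speed is given up, but the result still sits well above mechanical drive
# performance at a much lower cost per usable gigabyte.
```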
For companies where current storage performance is fine, the requirement to upgrade these systems comes either with age or with the need for a specific feature like snapshots or replication. The biggest motivation today, however, is the need for shared storage to support live migration in a server virtualization project. Storage services delivered as a virtual machine are being offered by a growing number of suppliers, and that is something we will detail further in our next entry.
Track us on Twitter: http://twitter.com/storageswiss
Subscribe to our RSS feed.
George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Find Storage Switzerland's disclosure statement here.