In the last two <a href="http://www.informationweek.com/blog/main/archives/2009/10/understanding_s_2.html">performance entries</a> we discussed understanding storage bandwidth and understanding storage controllers. Next up is understanding the performance characteristics of the hard drive itself, and how the mechanical hard drive can be the performance bottleneck.

George Crump, President, Storage Switzerland

October 21, 2009

3 Min Read

The big problem with hard drives is that, from a performance perspective, they are stuck. While everything else has increased in speed over the past few years, the fastest hard drives have been locked in at 15K RPM. What has kept mechanical drives viable in performance-sensitive data centers is the ability to group multiple drives together in an array group. Each drive in the array can respond to storage I/O requests, so as long as you have enough outstanding requests, also known as queue depth, performance improves every time you add a drive to the array group, as the sketch below illustrates.
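To make that scaling concrete, here is a minimal back-of-envelope sketch in Python. The 180 IOPS per 15K RPM drive and the queue depth of 32 are illustrative assumptions, not vendor figures; the point is that aggregate IOPS climbs with drive count only until you run out of outstanding requests to keep every spindle busy:

```python
# Rough model of array-group scaling: each spindle services requests
# independently, so aggregate IOPS grows with drive count as long as the
# outstanding-request count (queue depth) keeps every drive busy.
# The 180 IOPS per 15K RPM drive is an illustrative assumption.

def array_iops(drive_count: int, queue_depth: int,
               per_drive_iops: int = 180) -> int:
    """Aggregate IOPS, capped by how many drives the queue keeps busy."""
    busy_drives = min(drive_count, queue_depth)
    return busy_drives * per_drive_iops

for drives in (8, 16, 32, 64):
    print(f"{drives:3d} drives: {array_iops(drives, queue_depth=32):,} IOPS")
```

Note how the 64-drive case delivers no more IOPS than the 32-drive case: once queue depth is exhausted, the extra spindles sit idle.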

The downside to large drive-count array groups is wasted capacity: so many drives are added to hit the performance requirement that the application can't use all the capacity that comes along with them. Companies like 3PAR, Xiotech and Isilon get around this with fine-grained virtualization, meaning they can stripe data from all the attached servers across all or most of the available drives in the storage system. This technique, known as wide striping, strikes a balance for many organizations between high performance and efficient capacity utilization.
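A rough worked example shows how quickly capacity gets stranded when drives are added purely for performance. All of the figures below, the IOPS target, the per-drive IOPS and the per-drive capacity, are illustrative assumptions:

```python
# Illustration of the stranded-capacity problem: drives are added to hit
# an IOPS target, and the capacity that comes along goes unused unless
# the system can wide-stripe many servers' data across the same spindles.
# All figures below are illustrative assumptions.

import math

target_iops = 10_000
per_drive_iops = 180        # assumed 15K RPM drive
drive_capacity_gb = 300     # assumed usable capacity per drive
app_capacity_gb = 2_000     # what the application actually needs

drives_for_perf = math.ceil(target_iops / per_drive_iops)
provisioned_gb = drives_for_perf * drive_capacity_gb
stranded_gb = provisioned_gb - app_capacity_gb

print(f"{drives_for_perf} drives -> {provisioned_gb:,} GB provisioned, "
      f"{stranded_gb:,} GB stranded without wide striping")
```

Under these assumptions the single application strands nearly 15 TB; wide striping reclaims that capacity by letting other servers' data share the same spindles.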

At some point, however, you have added so many drives that it is no longer cost effective to add more, or you run out of outstanding storage I/O requests to keep them busy and you hit a response time, or latency, issue. The only step left is to speed up the drive itself.
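Little's Law (L = λ × W) gives a feel for why this happens. In the sketch below, again assuming 180 IOPS per drive, response time improves as spindles are added only until the queue depth can no longer keep the extra drives busy; past that point the per-request service time of the individual drive sets the floor:

```python
# Little's Law sketch (L = lambda * W) of why latency eventually floors
# out: adding spindles lowers response time only while the queue depth
# is deep enough to keep the extra drives busy; past that point the
# per-request service time of a single drive sets the floor.
# The per-drive IOPS figure is an illustrative assumption.

PER_DRIVE_IOPS = 180  # assumed 15K RPM drive, small random I/O

def avg_latency_ms(drive_count: int, queue_depth: int) -> float:
    """Average response time W = L / lambda, with lambda capped by busy drives."""
    busy_drives = min(drive_count, queue_depth)
    throughput = busy_drives * PER_DRIVE_IOPS  # requests per second
    return queue_depth / throughput * 1000

for drives in (16, 32, 64, 128):
    print(f"{drives:3d} drives, queue depth 64: {avg_latency_ms(drives, 64):.1f} ms")
```

The 128-drive result is identical to the 64-drive result: with only 64 outstanding requests, the extra drives can't help, and only a faster device can push latency lower.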

Where are the 20K RPM drives? While 20K RPM hard drives have been researched by various vendors, they have not been found to be viable for the market. These faster drives suffer from heat, vibration and reliability problems. Cost is also an issue; in fact a 20K RPM hard drive may be more expensive than an equivalent-capacity Solid State Disk (SSD). For reasons we state in "SSD's are cost effective NOW", the downward price trend on SSDs seems to have eliminated the development of the 20K RPM drive market. With SSD you get better performance with none of the heat and vibration issues that were set to plague 20K RPM drives. Finally, SSDs, especially enterprise-class SSDs using SLC flash, have proven to be as reliable as mechanical drives.
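Simple physics shows why a 20K RPM drive would have been a marginal win anyway: average rotational latency is half a revolution, so the jump from 15K to 20K RPM saves only about half a millisecond, while flash access times (the ~0.1 ms figure below is an assumed typical value, not a measurement) are roughly an order of magnitude lower:

```python
# Average rotational latency is half a revolution: 30 / RPM seconds.
# Moving from 15K to 20K RPM shaves only ~0.5 ms, while flash access
# times are roughly an order of magnitude lower. The SSD figure is an
# assumed typical value, not a measurement.

def avg_rotational_latency_ms(rpm: int) -> float:
    """Half a revolution, expressed in milliseconds."""
    return 30.0 / rpm * 1000

for rpm in (7200, 10_000, 15_000, 20_000):
    print(f"{rpm:>6,} RPM: {avg_rotational_latency_ms(rpm):.2f} ms rotational latency")

print("   SSD: ~0.10 ms access time (assumed typical SLC flash)")
```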

For many environments drive performance is not an issue, and a simple array group of 10K or even 7.2K RPM drives provides all the performance their applications need. As performance becomes an issue, and storage controller and bandwidth bottlenecks are eliminated, adding drive count is the next logical step, especially if you can leverage technologies like fine-grained virtualization. Eventually the final stop is SSD.

Even within SSD there are likely to be tiers of service, and you will want to use different SSD technologies for different SSD classes.

Track us on Twitter: http://twitter.com/storageswiss

Subscribe to our RSS feed.

George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Find Storage Switzerland's disclosure statement here.

About the Author(s)

George Crump

President, Storage Switzerland

George Crump is president and founder of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. With 25 years of experience designing storage solutions for datacenters across the US, he has seen the birth of such technologies as RAID, NAS, and SAN. Prior to founding Storage Switzerland, he was CTO at one of the nation’s largest storage integrators, where he was in charge of technology testing, integration, and product selection. George is responsible for the storage blog on InformationWeek's website and is a regular contributor to publications such as Byte and Switch, SearchStorage, eWeek, SearchServerVirtualization, and SearchDataBackup.

