News
10/21/2009
03:43 PM
George Crump
Commentary

Understanding Hard Drive Performance

In the last few performance entries we discussed understanding storage bandwidth and understanding storage controllers. Next up is understanding the performance characteristics of the hard drive itself and how the mechanical hard drive can become the performance bottleneck.

The big problem with hard drives is that they are stuck from a performance perspective. While everything else has gotten faster over the past few years, the fastest hard drives have been locked in at 15K RPM. What has kept mechanical drives viable in performance-concerned data centers is the ability to group multiple drives together in an array group. Each drive in the array can respond to storage I/O requests, so as long as there are enough outstanding requests, also known as queue depth, performance improves every time you add a drive to the array group.
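
To make the queue-depth point concrete, here is a minimal back-of-envelope sketch in Python; the per-drive IOPS figure and queue depth are illustrative assumptions, not vendor specifications.

PER_DRIVE_IOPS = 180  # assumed random IOPS for a single 15K RPM drive (illustrative)

def array_iops(drive_count, queue_depth):
    # Each spindle works on one request at a time, so only
    # min(queue_depth, drive_count) drives are kept busy; adding drives
    # beyond the outstanding queue depth buys no extra throughput.
    busy_drives = min(drive_count, queue_depth)
    return busy_drives * PER_DRIVE_IOPS

for drives in (8, 16, 32, 64):
    print(drives, "drives at queue depth 32 ->", array_iops(drives, 32), "IOPS")
# 8 -> 1440, 16 -> 2880, 32 -> 5760, 64 -> 5760: scaling stops once the queue depth is exhausted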

The downside to large drive-count array groups is wasted capacity: so many drives are added to hit the performance requirement that the application can't use all the capacity that comes along with them. Companies like 3PAR, Xiotech and Isilon get around this with fine-grained virtualization, meaning they can stripe the data from all the attached servers across all or most of the available drives in the storage system. This technique, known as wide striping, strikes a balance for many organizations between high performance and efficient capacity utilization.
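
A hypothetical sizing exercise shows where the stranded capacity comes from; every figure below (target IOPS, per-drive IOPS, drive capacity, required capacity) is an assumption chosen purely for illustration.

import math

TARGET_IOPS = 9_000        # assumed application requirement
NEEDED_CAPACITY_TB = 4     # assumed usable capacity the application actually needs
PER_DRIVE_IOPS = 180       # assumed for a 15K RPM drive
DRIVE_CAPACITY_TB = 0.6    # assumed 600 GB drive

drives_for_perf = math.ceil(TARGET_IOPS / PER_DRIVE_IOPS)   # 50 drives just for performance
raw_capacity_tb = drives_for_perf * DRIVE_CAPACITY_TB       # 30 TB of raw capacity
stranded_tb = raw_capacity_tb - NEEDED_CAPACITY_TB          # 26 TB the application never uses

print(f"{drives_for_perf} drives to hit the IOPS target, "
      f"{stranded_tb:.0f} TB of capacity left stranded")

Wide striping attacks exactly that stranded capacity by letting data from many attached servers share the same large pool of spindles.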

At some point, however, adding drives is no longer cost effective, or you run out of outstanding storage I/O requests and face a response time, or latency, issue. The only step left is to speed up the drive itself.
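
The response-time side of that trade-off can be sketched with Little's Law (average response time equals outstanding I/Os divided by throughput); the throughput ceiling below is carried over from the earlier illustrative model, not a measured number.

ARRAY_IOPS_CEILING = 5_760  # assumed saturation point of the array (from the sketch above)

def avg_response_ms(outstanding_ios):
    # Little's Law: average response time = requests in flight / throughput
    return outstanding_ios / ARRAY_IOPS_CEILING * 1_000

for qd in (32, 64, 128):
    print(f"queue depth {qd}: ~{avg_response_ms(qd):.1f} ms average response time")
# Once the spindles are saturated, deeper queues only add latency.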

Where are the 20K RPM drives? While 20K RPM hard drives have been researched by various vendors, they have not proven viable for the market. These faster drives suffer from heat, vibration and reliability problems. Cost is also an issue; in fact, a 20K RPM hard drive may be more expensive than an equivalent-capacity Solid State Disk (SSD). For the reasons we state in "SSD's are cost effective NOW", the downward price trend on SSDs seems to have eliminated the development of a 20K RPM drive market. With SSD you get better performance with none of the heat and vibration issues that were set to plague 20K RPM drives. Finally, SSDs, especially enterprise-class SSDs using SLC flash, have proven to be as reliable as mechanical drives.

For many environments drive performance is not an issue, and a simple array group of 10K or even 7.2K RPM drives provides all the performance their applications need. As performance becomes an issue, and storage controller and bandwidth issues are eliminated, adding drive count is the next logical step, especially if you can leverage technologies like fine-grained virtualization. Eventually the final stop is SSD.

Even within SSD there are likely to be tiers of service, and you will want to use different SSD technologies for different SSD classes.

Track us on Twitter: http://twitter.com/storageswiss

Subscribe to our RSS feed.

George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Find Storage Switzerland's disclosure statement here.
