News | Commentary
10/21/2009 03:43 PM
George Crump

Understanding Hard Drive Performance

In the last performance entries we discussed understanding storage bandwidth and understanding storage controllers. Next up is to understand the performance characteristics of the hard drive itself and how the mechanical hard drive can be the performance bottleneck.

The big problem with hard drives is that they are stuck from a performance perspective. While everything else has increased in speed over the past few years, the fastest hard drives have been locked in at 15K RPM. What has kept mechanical drives viable in performance-sensitive data centers is the ability to group multiple drives together in an array group. Each drive in the array can respond to storage I/O requests, so as long as the workload keeps enough requests outstanding, also known as queue depth, performance improves every time you add a drive to the array group.
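
To make the queue depth point concrete, here is a rough back-of-the-envelope sketch in Python. The per-drive IOPS figure is an assumption chosen for illustration, not a vendor specification; real numbers vary with block size and access pattern.

# Rough sketch with assumed, illustrative numbers: aggregate random IOPS of a
# 15K RPM array group, and the queue depth needed to keep every drive busy.

IOPS_PER_15K_DRIVE = 180  # assumption: small-block random IOPS for one 15K RPM drive

def array_group_iops(drive_count, per_drive_iops=IOPS_PER_15K_DRIVE):
    """Aggregate IOPS, assuming the workload keeps every spindle busy."""
    return drive_count * per_drive_iops

for drives in (4, 8, 16, 32):
    # Each in-flight request occupies one drive, so the workload needs at
    # least one outstanding request per drive to see the scaling benefit.
    print(f"{drives:>3} drives -> ~{array_group_iops(drives):>5} IOPS, "
          f"needs queue depth >= {drives}")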

The downside to large drive-count array groups is wasted capacity: so many drives are added to hit the performance requirement that the application can't use all the capacity that came along with them. Companies like 3PAR, Xiotech and Isilon get around this by performing fine grain virtualization, meaning they can stripe the data from all the attached servers across all or most of the available drives in the storage system. For many organizations this technique, known as wide striping, strikes a balance between high performance and efficient capacity utilization.
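
A small, hypothetical sizing example shows where the stranded capacity comes from and why wide striping recovers it. The drive capacity, IOPS figure and application numbers below are assumptions used only for illustration.

import math

PER_DRIVE_IOPS = 180   # assumption: 15K RPM drive, small random I/O
PER_DRIVE_GB = 450     # assumption: 450 GB usable capacity per drive

def drives_needed(iops_target, capacity_gb):
    """Drive count is set by whichever requirement, IOPS or capacity, is larger."""
    by_iops = math.ceil(iops_target / PER_DRIVE_IOPS)
    by_capacity = math.ceil(capacity_gb / PER_DRIVE_GB)
    return max(by_iops, by_capacity)

# Hypothetical application: needs 5,000 IOPS but stores only 1 TB of data.
n = drives_needed(5000, 1000)
stranded_gb = n * PER_DRIVE_GB - 1000
print(f"{n} drives to hit 5,000 IOPS; {stranded_gb} GB the application never uses")
# With wide striping, other servers' data is spread across these same spindles,
# so the stranded capacity serves other workloads instead of sitting idle.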

At some point, however, you reach a situation where adding more drives is no longer cost effective, or you run out of outstanding storage I/O requests to keep them busy and response time, or latency, becomes the issue. The only step left is to speed up the drive itself.
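
The latency side of that trade-off can be sketched with Little's Law. Again the per-drive service rate is an assumption used only to show the shape of the curve, not a measured figure.

# Sketch using Little's Law: average response time = outstanding requests / throughput.
# Assumed numbers; once queue depth exceeds what the spindles can absorb,
# added concurrency only adds waiting time.

def avg_response_time_ms(queue_depth, drives, per_drive_iops=180):
    max_iops = drives * per_drive_iops
    achieved_iops = min(queue_depth * per_drive_iops, max_iops)
    return 1000.0 * queue_depth / achieved_iops

drives = 16
for qd in (8, 16, 32, 64, 128):
    print(f"queue depth {qd:>3}: ~{avg_response_time_ms(qd, drives):5.1f} ms average response")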

Where are 20K RPM drives? While 20K RPM hard drives have been researched by various vendors, they have not been found to be viable for the market. These faster drives suffer from heat, vibration and reliability problems. Cost is also an issue; in fact, a 20K RPM drive may be more expensive than an equivalent-capacity solid state disk (SSD). For reasons we state in "SSD's are cost effective NOW", the downward price trend on SSDs seems to have eliminated the development of a 20K RPM drive market. With SSD you get better performance with none of the heat and vibration issues that were set to plague 20K RPM drives. Finally, SSDs, especially enterprise-class SSDs using SLC flash, have proven to be as reliable as mechanical drives.

For many environments drive performance is not an issue, and a simple array group of 10K or even 7.2K RPM drives provides all the performance their applications need. As performance becomes an issue, and once storage controller and bandwidth bottlenecks are eliminated, adding drive count is the next logical step, especially if you can leverage technologies like fine grain virtualization. Eventually the final stop is SSD.

Even within SSD there are likely to be tiers of service, and you will want to use different SSD technologies for different SSD classes.

Track us on Twitter: http://twitter.com/storageswiss

Subscribe to our RSS feed.

George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Find Storage Switzerland's disclosure statement here.
