George Crump

Understanding Hard Drive Performance

In the last performance entries we discussed understanding storage bandwidth and understanding storage controllers. Next up is to understand the performance characteristics of the hard drive itself and how the mechanical hard drive can be the performance bottleneck.

The big problem with hard drives is that they are stuck from a performance perspective. While everything else in the data center has gotten faster over the past few years, the fastest hard drives have been locked at 15K RPM. What has kept mechanical drives viable in performance-sensitive data centers is the ability to group multiple drives together into an array group. Each drive in the array can respond to storage I/O requests, so as long as the workload keeps enough requests outstanding (also known as queue depth), performance improves every time you add a drive to the array group.
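As a rough sketch of why queue depth matters (illustrative numbers, not vendor data), aggregate random IOPS scales with drive count only while there are enough outstanding requests to keep every spindle busy:

```python
# Illustrative model: an array group's random IOPS scales roughly linearly
# with drive count, but only while queue depth keeps every drive busy.

def array_iops(drive_count, iops_per_drive, queue_depth):
    """Rough upper bound on random IOPS for an array group.

    If the queue depth is lower than the number of drives, some drives
    sit idle and the array cannot reach its full parallel throughput.
    """
    busy_drives = min(drive_count, queue_depth)
    return busy_drives * iops_per_drive

# A 15K RPM drive is often ballparked at ~180 random IOPS (an assumption here).
print(array_iops(drive_count=8, iops_per_drive=180, queue_depth=32))  # all 8 drives busy -> 1440
print(array_iops(drive_count=8, iops_per_drive=180, queue_depth=4))   # only 4 busy -> 720
```

The second call shows the failure mode the text describes: with too few outstanding requests, adding drives stops helping.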

The downside to large drive-count array groups is wasted capacity: so many drives are added to hit the performance requirement that the application cannot use all the capacity that comes along with them. Companies like 3PAR, Xiotech and Isilon get around this with fine-grained virtualization, meaning they can stripe the data from all the attached servers across all, or most, of the available drives in the storage system. This technique, known as wide striping, strikes a balance for many organizations between high performance and efficient capacity utilization.
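A minimal sketch of the idea behind wide striping, assuming a simple round-robin layout (real products use far more sophisticated placement): logical blocks are spread across every drive, so any volume's I/O touches the full spindle count.

```python
# Hypothetical round-robin wide-striping layout (an assumption for
# illustration, not any vendor's actual on-disk format).

def stripe_location(logical_block, stripe_size, drive_count):
    """Map a logical block number to (drive index, offset on that drive)."""
    stripe = logical_block // stripe_size          # which stripe the block falls in
    drive = stripe % drive_count                   # stripes rotate across drives
    offset = (stripe // drive_count) * stripe_size + logical_block % stripe_size
    return drive, offset

# With 12 drives and 64-block stripes, consecutive stripes land on
# consecutive drives, so a large read is served by many spindles at once.
print([stripe_location(b, stripe_size=64, drive_count=12)[0]
       for b in range(0, 64 * 4, 64)])  # -> [0, 1, 2, 3]
```

The point of the sketch is simply that no single volume is pinned to a small set of drives, which is what lets capacity and performance be provisioned independently.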

At some point, however, you reach a situation where adding more drives is no longer cost effective, or the workload runs out of outstanding storage I/O requests and you hit a response-time (latency) problem. The only step left is to speed up the drive itself.
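A back-of-the-envelope way to see the latency wall, using a textbook M/M/1 queueing approximation (an assumption for illustration, not a vendor model): as the request rate approaches what the spindles can service, response time climbs steeply.

```python
# M/M/1 approximation of mean response time for a drive array
# (illustrative only; real arrays are not M/M/1 queues).

def response_time_ms(arrival_iops, service_iops):
    """Mean response time in ms; None if the array is overloaded."""
    if arrival_iops >= service_iops:
        return None  # queue grows without bound
    return 1000.0 / (service_iops - arrival_iops)

# An array good for ~1,440 IOPS (illustrative): latency at 50% vs 95% load.
print(response_time_ms(720, 1440))   # ~1.4 ms
print(response_time_ms(1368, 1440))  # ~13.9 ms
```

Even this crude model shows why throwing more load at a nearly saturated array makes latency explode rather than throughput grow.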

Where are the 20K RPM drives? While 20K RPM hard drives have been researched by various vendors, they have not proven viable for the market. The faster drives suffer from heat, vibration and reliability problems. Cost is also an issue; in fact, a 20K RPM drive may be more expensive than an equivalent-capacity Solid State Disk (SSD). For the reasons we state in "SSD's are cost effective NOW", the downward price trend on SSDs seems to have eliminated development of a 20K RPM drive market. With SSDs you get better performance with none of the heat and vibration issues that were set to plague 20K RPM drives. Finally, SSDs, especially enterprise-class SSDs using SLC flash, have proven to be as reliable as mechanical drives.

For many environments drive performance is not an issue, and a simple array group of 10K or even 7.2K RPM drives provides all the performance their applications need. As performance becomes an issue, and once storage controller and bandwidth bottlenecks have been eliminated, adding drive count is the next logical step, especially if you can leverage technologies like fine-grained virtualization. Eventually the final stop is SSD.

Even within SSD there are likely to be tiers of service, and you will want to use different SSD technologies for the different SSD classes.

George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Find Storage Switzerland's disclosure statement here.
