Commentary
10/21/2009 03:43 PM
George Crump

Understanding Hard Drive Performance

In the last performance entries we discussed understanding storage bandwidth and understanding storage controllers. Next up is to understand the performance characteristics of the hard drive itself and how the mechanical hard drive can be the performance bottleneck.

The big problem with hard drives is that they are stuck from a performance perspective. While everything else has increased in speed over the past few years, the fastest hard drives have been locked in at 15K RPM. What has kept mechanical drives viable in performance-concerned data centers is the ability to group multiple drives together in an array group. Each drive in the array can respond to storage I/O requests. As long as you have enough requests outstanding, also known as queue depth, every drive you add to an array group improves performance.
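The relationship above can be sketched with a simple model. This is an illustrative approximation, not vendor math: the per-drive IOPS figure is an assumption, and the model treats each drive as servicing one request at a time, so aggregate throughput scales with drive count only while the queue depth is deep enough to keep every spindle busy.

```python
def array_iops(drive_count, iops_per_drive, queue_depth):
    """Aggregate random IOPS for an array group (simplified model).

    Each drive services one request at a time here, so at most
    `queue_depth` drives are ever busy concurrently.
    """
    busy_drives = min(drive_count, queue_depth)
    return busy_drives * iops_per_drive

# Assume roughly 180 random IOPS per 15K RPM drive (illustrative figure).
print(array_iops(8, 180, 32))    # queue depth exceeds drives: all 8 busy -> 1440
print(array_iops(32, 180, 16))   # queue depth limits: only 16 drives busy -> 2880
```

The second call shows the flip side of the argument: past a point, adding drives buys nothing unless the applications generate enough concurrent requests.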

The downside to large drive-count array groups is wasted capacity: so many drives are added to hit the performance target that the application cannot use all the capacity that comes along with them. Companies like 3PAR, Xiotech and Isilon get around this by performing fine-grain virtualization, meaning they can stripe all the data from all the attached servers across all or most of the available drives in the storage system. This technique, known as wide striping, strikes a balance for many organizations between high performance and efficient capacity utilization.
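A hypothetical sketch of the block mapping behind wide striping: logical blocks from every volume are laid out round-robin across all drives, so any one volume's I/O load is shared by many spindles. The stripe size and drive count here are assumed example values, not any particular vendor's layout.

```python
def stripe_location(logical_block, stripe_size_blocks, drive_count):
    """Map a logical block to (drive index, block offset on that drive)
    under a simple round-robin wide-striping layout."""
    stripe_number = logical_block // stripe_size_blocks
    drive = stripe_number % drive_count
    offset_within_stripe = logical_block % stripe_size_blocks
    block_on_drive = (stripe_number // drive_count) * stripe_size_blocks + offset_within_stripe
    return drive, block_on_drive

# With 16 drives and 128-block stripes, consecutive stripes land on
# consecutive drives, so a large sequential read touches every spindle.
print(stripe_location(0, 128, 16))     # (0, 0)
print(stripe_location(128, 128, 16))   # (1, 0)
print(stripe_location(2048, 128, 16))  # (0, 128) -- wrapped back to drive 0
```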

At some point, however, adding drives is no longer cost effective, or you run out of storage I/O requests to keep them busy and hit a response-time, or latency, problem. The only step left is to speed up the drive itself.
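Some back-of-the-envelope math shows why spindle speed is the lever that matters here: average rotational latency is half a revolution, and that is fixed by RPM no matter how many drives you add.

```python
def avg_rotational_latency_ms(rpm):
    """Average rotational latency: half a revolution, in milliseconds."""
    ms_per_revolution = 60_000 / rpm
    return ms_per_revolution / 2

print(avg_rotational_latency_ms(15_000))  # 2.0 ms for a 15K RPM drive
print(avg_rotational_latency_ms(7_500))   # 4.0 ms for a 7.5K RPM drive
```

By this arithmetic a 20K RPM drive would still carry 1.5 ms of average rotational latency, which helps explain why flash, with no rotational delay at all, leapfrogged that development effort.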

Where are the 20K RPM drives? While 20K RPM hard drives have been researched by various vendors, they have not been found viable for the market. Heat, vibration and reliability have all proven problematic at those speeds. Cost is also an issue; in fact, a 20K RPM drive may be more expensive than an equivalent-capacity Solid State Disk (SSD). For reasons we state in "SSD's are cost effective NOW", the downward price trend on SSDs seems to have eliminated development of the 20K RPM drive market. With SSD you get better performance with none of the heat and vibration issues that were set to plague 20K drives. Finally, SSDs, especially enterprise-class SSDs using SLC flash, have proven to be as reliable as mechanical drives.

For many environments drive performance is not an issue, and a simple array group of 10K or even 7.5K RPM drives provides all the performance their applications need. As performance becomes an issue, and storage controller and bandwidth bottlenecks are eliminated, adding drive count is the next logical step, especially if you can leverage technologies like fine-grain virtualization. Eventually the final stop is SSD.

Even within SSD there are likely to be tiers of service, and you will want to use different SSD technologies for different SSD classes.

Track us on Twitter: http://twitter.com/storageswiss

Subscribe to our RSS feed.

George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Find Storage Switzerland's disclosure statement here.
