Commentary
George Crump
11/7/2008 08:52 AM

SSD's Latency Impact

In our last entry we talked about latency and what it is. We also discussed how storage system manufacturers try to overcome the latency and performance limitations of mechanical drives with techniques like higher-RPM drives, array groups with high drive counts, short-stroking those drives, wide-striping them, and adding application servers for improved parallelism. All of these techniques cost money, are not very green, and in many cases are more expensive than simply using SSD. Nor do they typically come close to SSD performance. The result has been standalone, purpose-built SSD solutions like those from Texas Memory Systems, Solid Data Systems, and Violin Memory, or manufacturers adding SSD in a "drive-like" manner to their existing storage systems.
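
A rough sketch (using assumed, illustrative drive figures rather than anything from a specific vendor) shows why the spindle tricks above only go so far: even a short-stroked 15K RPM drive is still a milliseconds-per-I/O device.

# Back-of-envelope mechanical-drive latency, in Python. All numbers are
# illustrative assumptions, not measurements of any particular product.
def disk_latency_ms(rpm, avg_seek_ms):
    # Average rotational delay is half a revolution; 60,000 ms per minute.
    rotational_ms = 0.5 * (60_000 / rpm)
    return rotational_ms + avg_seek_ms

print(round(disk_latency_ms(7_200, 8.5), 1))    # ~12.7 ms for a 7,200 RPM drive
print(round(disk_latency_ms(15_000, 3.5), 1))   # ~5.5 ms with a faster spindle
print(round(disk_latency_ms(15_000, 1.0), 1))   # ~3.0 ms short-stroked (shorter seeks)
# DRAM- and flash-based SSDs respond in tens of microseconds, orders of
# magnitude below anything a spinning disk can reach.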

The speed of SSD technology, especially DRAM, shifts the latency focus away from the storage medium, which has now been optimized, and onto the storage system's infrastructure, which is suddenly much slower than the storage media. For vendors that incorporate SSD into existing drive enclosures, the performance of the shelf itself becomes a problem, the performance of the processors in the controllers becomes a problem, and an incorrectly sized cache (too big or too small) becomes a problem.
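
A toy latency budget (again with assumed numbers) illustrates the shift: once the media responds in microseconds, the shelf, transport, and controller overhead that barely registered with disk becomes the bulk of the round trip.

# Illustrative latency budget; the component figures are assumptions.
def total_latency_us(media_us, controller_us, shelf_and_transport_us):
    return media_us + controller_us + shelf_and_transport_us

hdd = total_latency_us(media_us=5_000, controller_us=100, shelf_and_transport_us=100)
ssd = total_latency_us(media_us=50, controller_us=100, shelf_and_transport_us=100)

print(f"disk array: {hdd} us total, media is {5_000 / hdd:.0%} of it")  # ~96%
print(f"SSD array:  {ssd} us total, media is {50 / ssd:.0%} of it")     # ~20%
# With disk, the medium dominates; with SSD, the surrounding infrastructure does.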

Another factor is the software load on the controller. For the past several years, storage manufacturers have been piling features onto the storage controller: snapshots, replication, data deduplication, and others. All of these features take computing resources away from responding to storage I/O requests, which worsens system latency.
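
To see why feature overhead hurts, a simple single-queue model (an assumed M/M/1 approximation, not anything measured on a real array) shows how response time balloons as feature code eats into the controller's CPU headroom and utilization climbs.

# Assumed M/M/1 sketch of controller response time vs. how busy the CPU is.
def response_time_us(service_us, utilization):
    # Mean response time in an M/M/1 queue: service time / (1 - utilization).
    return service_us / (1.0 - utilization)

for utilization in (0.5, 0.7, 0.9):
    print(f"{utilization:.0%} busy -> {response_time_us(100, utilization):.0f} us")
# 50% busy -> 200 us, 70% -> 333 us, 90% -> 1,000 us per I/O request.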

The result is that while the SSD technology going into the solution may be fast, simply adding SSD to your storage system may not improve performance as dramatically as it should. Standalone, purpose-built SSD systems deliver lower latency because their vendors have built systems from the chip up to capitalize on the low latency of SSD.

In our next entry, we will examine those differences and how storage manufacturers will need to alter their delivery of SSD technology. Then we will wrap up with capacity management on SSDs. Given the cost of SSD technology, the only good SSD is a FULL SSD.

Join us for our upcoming Webcast SSD: Flash vs. DRAM...and the winner is?

Track us on Twitter: http://twitter.com/storageswiss.

Subscribe to our RSS feed.

George Crump is founder of Storage Switzerland, an analyst firm focused on the virtualization and storage marketplaces. It provides strategic consulting and analysis to storage users, suppliers, and integrators. An industry veteran of more than 25 years, Crump has held engineering and sales positions at various IT industry manufacturers and integrators. Prior to Storage Switzerland, he was CTO at one of the nation's largest integrators.
