Dark Reading is part of the Informa Tech Division of Informa PLC



George Crump

SSD Domination, Sooner Than You Think

Based on the recent news that Intel has announced an 80-GB Solid State Disk for less than $600, the end for the mechanical drive may get here within the next five years.

Think about it: a 15K-RPM 300-GB Fibre Channel drive costs a little more than $400. While the 80-GB SSD from Intel isn't what I would call enterprise class, and there is certainly a delta in capacity, it is more than reasonable to think that the gap between solid state and mechanical drives will close relatively fast. In Tier One storage, the gap doesn't need to close completely -- if you subscribe to the conventional wisdom that 80% of your data is inactive, you only need enough SSD capacity to hold the 20% of data that is most active. Viewed from a watts-per-performance perspective, SSD is also greener: with mechanical drives you often have to buy extra drive count to get the performance you need, which isn't the case with SSD.

Some maturing still needs to happen. First, SSD technology is 30X or more faster than the current state of the art in mechanical drive performance. This means the current storage drive shelves and controllers need to be optimized, if not totally redesigned, for this zero-latency environment. The current practice of storage manufacturers plugging SSD modules into their existing drive shelves is a short-term workaround to get SSD to the masses. Eventually these manufacturers will need to follow the model of companies like Texas Memory Systems, Solid Data, and Violin Memory, which have built systems from the ground up for the zero-latency environment.

Second, there needs to be intelligence at the storage controller level to move data in and out of the SSD area. Right now, SSD is expensive enough that most customers know exactly which files or components of a database they want to put on SSD. As SSD becomes less expensive and larger in capacity, its use will broaden, and the need to automate data movement in and out of the SSD tier will become more critical.
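To make the 80/20 idea concrete, here is a minimal sketch of the kind of placement logic such controller intelligence might use: rank data by access count and greedily fill the SSD tier with the hottest items until capacity runs out. All names (`FileStats`, `plan_ssd_tier`) and the sample figures are illustrative assumptions, not taken from any real product.

```python
# Illustrative sketch only: greedy hot/cold placement for a small SSD tier.
from dataclasses import dataclass

@dataclass
class FileStats:
    name: str
    size_gb: float
    accesses: int  # access count over some sampling window

def plan_ssd_tier(files, ssd_capacity_gb):
    """Place the most-accessed files on SSD until capacity runs out;
    everything else stays on mechanical disk."""
    ssd, hdd = [], []
    used = 0.0
    for f in sorted(files, key=lambda f: f.accesses, reverse=True):
        if used + f.size_gb <= ssd_capacity_gb:
            ssd.append(f.name)
            used += f.size_gb
        else:
            hdd.append(f.name)
    return ssd, hdd

# Hypothetical workload: two hot database components, one cold archive file.
files = [
    FileStats("orders.db", 40, 9000),
    FileStats("logs.tar", 120, 50),
    FileStats("index.dat", 30, 7000),
]
ssd, hdd = plan_ssd_tier(files, ssd_capacity_gb=80)
print(ssd)  # ['orders.db', 'index.dat']
print(hdd)  # ['logs.tar']
```

A real controller would of course work on blocks or extents rather than whole files, and would re-evaluate continuously, but the principle -- a small SSD tier absorbing the active minority of the data -- is the same.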
This will continue at least until SSD becomes so inexpensive that all your Tier One storage is SSD in some form. Even when we get to that point, maybe within the next five years, there will need to be some intelligence to move data down to Tier 3 archive storage. This move likely won't be controller-driven; it will be done either by a global file system or by a specific but simple software data mover.

From a timeline perspective, I would expect SSD to remain application- or even file-specific for the next 18 months, although the number of applications that utilize it will grow. I don't expect to see the wild growth some research firms have predicted. In the next two to four years, I would expect a broader application of SSD across ever-growing chunks of Tier One storage, with some sort of automated data movement in and out of the SSD areas. Finally, within the next five years, I would expect most data centers to begin to move toward a two-tier strategy of polar opposites -- SSD and archive, with nothing in between.

Don't think that once we get everything in Tier One over to SSD your performance problems will be solved. Initially, a lot of time will be spent addressing the latency issues that SSD exposes elsewhere in the stack -- who thought we would be complaining that drive shelves aren't fast enough? Once those issues are resolved, there will be complaints that SSD itself is not fast enough, and then we will have a whole new tiering system for SSD drives.
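The "specific but simple software data mover" for a two-tier world can be sketched in a few lines: anything idle past a cutoff goes to archive, everything else stays on SSD. The tier names and the 90-day threshold here are assumptions for illustration, not a recommendation.

```python
# Illustrative two-tier placement rule: SSD or archive, nothing in between.
import time

ARCHIVE_AFTER_DAYS = 90  # assumed policy cutoff

def choose_tier(last_access_epoch, now=None):
    """Return 'ssd' for recently used data, 'archive' otherwise."""
    now = time.time() if now is None else now
    idle_days = (now - last_access_epoch) / 86400
    return "archive" if idle_days > ARCHIVE_AFTER_DAYS else "ssd"

now = 1_000_000_000  # fixed clock so the example is deterministic
print(choose_tier(now - 10 * 86400, now))   # ssd (10 days idle)
print(choose_tier(now - 200 * 86400, now))  # archive (200 days idle)
```

A global file system would apply the same rule transparently beneath a single namespace; a standalone mover would instead scan and migrate files on a schedule.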

Track us on Twitter: http://twitter.com/storageswiss.

Subscribe to our RSS feed.

George Crump is founder of Storage Switzerland, an analyst firm focused on the virtualization and storage marketplaces. It provides strategic consulting and analysis to storage users, suppliers, and integrators. An industry veteran of more than 25 years, Crump has held engineering and sales positions at various IT industry manufacturers and integrators. Prior to Storage Switzerland, he was CTO at one of the nation's largest integrators.
