
News | Commentary
9/10/2008 10:32 AM
George Crump

SSD Domination, Sooner Than You Think

With Intel's recent announcement of an 80-GB solid state disk for less than $600, the end of the mechanical drive may arrive within the next five years.

Think about it: a 15K 300-GB Fibre Channel drive costs a little more than $400. While the 80-GB SSD from Intel isn't what I would call enterprise class, and there is certainly a gap in capacity, it is more than reasonable to expect the gap between solid state and mechanical drives to close relatively quickly. In Tier One storage, the gap doesn't need to close completely: if you subscribe to the conventional wisdom that 80% of your data is inactive, you only need enough SSD capacity to hold the 20% of data that is most active. Viewed from a watts-per-performance perspective, SSD is also greener. With mechanical drives you often have to buy additional drives just to reach the performance you need; that isn't the case with SSD.

Some maturing still needs to happen. SSD technology is 30X or more faster than the current state of the art in mechanical drive performance, which means that today's drive shelves and storage controllers need to be optimized, if not totally redesigned, for this zero-latency environment. The current practice of storage manufacturers plugging SSD modules into their existing drive shelves is a short-term workaround to get SSD to the masses. Eventually these manufacturers will need to follow the model of companies like Texas Memory Systems, Solid Data, and Violin Memory, which have built systems from the ground up for the zero-latency environment.

Second, there needs to be intelligence at the storage controller level to move data in and out of the SSD area. Right now, SSD is expensive enough that most customers know exactly which files or which components of a database they want to put on it. As SSD becomes less expensive and its capacities grow, its use will broaden, and the need to automate data movement in and out of the SSD tier will become more critical. This will continue at least until SSD becomes so inexpensive that all of your Tier One storage is SSD in some form. Even at that point, maybe within the next five years, there will still need to be some intelligence to move data down to Tier 3 archive storage. That move will likely not be controller driven; it will be handled either by a global file system or by a specific but simple software data mover.

From a timeline perspective, I expect SSD to remain application- or even file-specific for the next 18 months, although the number of applications that use it will grow. I don't expect the wild growth that some research firms have predicted. In the next two to four years I expect to see broader application of SSD across ever-growing chunks of Tier One storage, with some sort of automated data movement in and out of the SSD areas. Finally, within the next five years I expect most data centers to begin moving toward a two-tier strategy of polar opposites, SSD and archive, with nothing in between.

Don't think, though, that once everything in Tier One has moved to SSD, your performance problems will be solved. Initially, a lot of time will be spent addressing the latency issues that SSD exposes elsewhere in the stack; who thought we would be complaining that drive shelves aren't fast enough? Once those SSD-exposed latency issues are resolved, there will be complaints that SSD itself is not fast enough, and then we will have a whole new tiering system for SSD drives.
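To make the cost and sizing argument concrete, here is a quick back-of-the-envelope sketch in Python. The drive prices are the rough figures cited above; the 100-TB Tier One pool and the 80/20 active-data split are hypothetical assumptions for illustration, not measurements from any particular configuration.

# Back-of-the-envelope math for the cost and sizing claims above.
# Prices are the rough figures cited in the post; the 100-TB Tier One
# pool is a hypothetical example, not a measured configuration.

fibre_price, fibre_capacity_gb = 400.0, 300.0   # 15K 300-GB Fibre Channel drive
ssd_price, ssd_capacity_gb = 600.0, 80.0        # Intel 80-GB SSD

fibre_per_gb = fibre_price / fibre_capacity_gb  # ~$1.33/GB
ssd_per_gb = ssd_price / ssd_capacity_gb        # ~$7.50/GB
print(f"Fibre: ${fibre_per_gb:.2f}/GB, SSD: ${ssd_per_gb:.2f}/GB, "
      f"gap: {ssd_per_gb / fibre_per_gb:.1f}x")

# If only 20% of Tier One data is active, the SSD tier only has to be
# sized for that slice, not for the whole pool.
tier_one_tb = 100          # hypothetical Tier One pool size
active_fraction = 0.20     # the conventional 80/20 split cited above
print(f"SSD capacity needed for a {tier_one_tb}-TB pool: "
      f"{tier_one_tb * active_fraction:.0f} TB")

At these numbers SSD is still several times more expensive per gigabyte, but because it only has to absorb the active slice of the pool, the effective premium shrinks considerably.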
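The automated movement of data in and out of the SSD tier described above boils down to a policy decision about what is hot and what is cold. The sketch below is a hypothetical illustration of such a policy, not any storage controller's actual logic; the tier names, access-rate thresholds, and the Extent structure are all assumptions made for the example.

# Hypothetical sketch of a simple tiering policy: hot data is promoted
# to SSD, cold data is demoted to archive. The thresholds, tier names,
# and access-count model are illustrative assumptions, not any
# controller vendor's implementation.
from dataclasses import dataclass

HOT_ACCESSES_PER_DAY = 100    # promote anything busier than this
COLD_ACCESSES_PER_DAY = 1     # demote anything quieter than this

@dataclass
class Extent:
    name: str
    tier: str                  # "ssd", "disk", or "archive"
    accesses_per_day: int

def rebalance(extents):
    """Move each extent toward the tier its recent access rate suggests."""
    for e in extents:
        if e.accesses_per_day >= HOT_ACCESSES_PER_DAY:
            e.tier = "ssd"          # the active slice belongs on SSD
        elif e.accesses_per_day <= COLD_ACCESSES_PER_DAY:
            e.tier = "archive"      # inactive data drops to the archive tier
        # anything in between stays where it is during the transition years

extents = [
    Extent("orders_index", "disk", 5000),
    Extent("q2_reports", "disk", 40),
    Extent("2005_backups", "disk", 0),
]
rebalance(extents)
for e in extents:
    print(e.name, "->", e.tier)

In the two-tier end state described above, the middle branch simply disappears: everything lands on either SSD or archive, whether the mover lives in the controller, a global file system, or a simple software data mover.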

Track us on Twitter: http://twitter.com/storageswiss.

Subscribe to our RSS feed.

George Crump is founder of Storage Switzerland, an analyst firm focused on the virtualization and storage marketplaces. It provides strategic consulting and analysis to storage users, suppliers, and integrators. An industry veteran of more than 25 years, Crump has held engineering and sales positions at various IT industry manufacturers and integrators. Prior to Storage Switzerland, he was CTO at one of the nation's largest integrators.
