News

8/4/2008
04:48 PM
George Crump
Commentary

Enterprise Solid State Disk - Where Are We?

It seems like everyone is jumping into SSD (Solid State Disk) today. EMC, Sun, Hitachi, HP, and others have all made announcements about adopting SSD. As I discussed in an earlier entry, the numbers certainly make good conversation pieces, but where are we in terms of market adoption? IDC recently published a report projecting a 70% compound annual growth rate for the SSD market over the next five years, which would make this a very hot market. Or is adoption actually growing at a much slower pace? If adoption is slower, it will tend to benefit the legacy SSD companies like Solid Data and Texas Memory Systems rather than the larger system vendors. These companies are all pretty lean and mean; any growth in the market is good for them.

To gauge market growth, you have to look at what people can or want to do with an enterprise SSD. As you can guess, performance is the key, and often the only, issue when deciding whether to deploy an SSD. Given the price delta between SSD and spinning disk, that performance gain has to be significant compared with what can be had from today's advanced disk technologies.
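
To make that delta concrete, here is a rough, back-of-the-envelope cost-per-IOPS comparison. The prices, capacities, and IOPS figures below are purely illustrative assumptions, not vendor numbers; the point is simply that SSD can look expensive per gigabyte yet inexpensive per I/O.

```python
# Back-of-the-envelope cost-per-IOPS comparison (illustrative numbers only --
# the prices and IOPS figures below are assumptions, not vendor quotes).

def cost_per_iops(price_per_gb, capacity_gb, iops):
    """Total device cost divided by the IOPS it delivers."""
    return (price_per_gb * capacity_gb) / iops

# Hypothetical figures for a 146 GB 15K drive vs. a 73 GB flash SSD.
hdd = cost_per_iops(price_per_gb=5.0,  capacity_gb=146, iops=180)
ssd = cost_per_iops(price_per_gb=40.0, capacity_gb=73,  iops=30000)

print(f"15K disk:  ${hdd:.2f} per IOPS")
print(f"Flash SSD: ${ssd:.2f} per IOPS")
```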

The application also has to be able to take advantage of the speed available to it. For now, that comes down to a very finite number of applications that are directly responsible for revenue production within an organization. The entire application almost never benefits from the performance SSD offers; only a few very active hot files do, and those hot files end up on SSDs. In general, these files are moved to the SSD manually by a savvy system or database administrator. It's not very fancy or automated, but it works and often delivers very impressive results, improving overall application performance while reducing the number of servers the application requires. These workloads tend to be small enough that they are typically placed on DRAM-based, not Flash-based, SSDs. Why? From an enterprise perspective, DRAM systems are by far the most commonly deployed type of SSD; Flash-based systems only came into full light over the last 12 months. There are technical reasons as well: although more expensive, DRAM delivers consistent random I/O performance across both reads and writes. Flash-based systems, on the other hand, offer solid read performance but take a significant hit on writes compared with DRAM, although they're still faster than hard disk writes.
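
As a rough illustration of that manual process, the sketch below shows the kind of script a savvy administrator might put together: rank data files by recent access as a crude proxy for "heat," move the busiest few to an SSD mount point, and leave symlinks behind so application paths still resolve. The directory paths, the atime heuristic, and the "top 3" cutoff are all assumptions for illustration, and in practice the database would be quiesced before any files are moved.

```python
# Minimal sketch of manual "hot file" relocation to an SSD tier.
# Paths and thresholds are hypothetical; this is not any vendor's tool.
import os
import shutil

DATA_DIR = "/var/lib/db/data"   # hypothetical database data directory
SSD_DIR = "/mnt/ssd/db-hot"     # hypothetical SSD mount point
HOT_FILE_COUNT = 3

def hottest_files(directory, count):
    """Return the most recently accessed files, a crude proxy for heat."""
    files = [os.path.join(directory, f) for f in os.listdir(directory)]
    files = [f for f in files if os.path.isfile(f) and not os.path.islink(f)]
    return sorted(files, key=lambda f: os.stat(f).st_atime, reverse=True)[:count]

def relocate_to_ssd(path, ssd_dir):
    """Move one file to the SSD tier and symlink it back into place."""
    target = os.path.join(ssd_dir, os.path.basename(path))
    shutil.move(path, target)
    os.symlink(target, path)

if __name__ == "__main__":
    os.makedirs(SSD_DIR, exist_ok=True)
    for path in hottest_files(DATA_DIR, HOT_FILE_COUNT):
        relocate_to_ssd(path, SSD_DIR)
```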

Despite this, looking ahead to the next year, Flash SSDs will provide most of the unit growth in the SSD space because of their pricing advantage, and as long as they are applied to the correct, read-heavy workloads, they will still deliver a significant performance improvement. The comparison between the two SSD technologies is important because most of the storage system suppliers today are using Flash-based technology, and they don't have the option of offering DRAM when it makes sense to. These legacy storage solution providers will have to gain expertise in this game-changing performance technology and advise customers correctly on when a DRAM-based solution is a better fit than a Flash-based one. Improving the performance of key applications will still be the primary motivator, and it is an area where the SSD specialists will have a distinct advantage.
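
One way to picture that "read-heavy workload" test: on Linux, a device's read/write mix can be estimated from /proc/diskstats. The sketch below is only illustrative; the 80% read threshold and the device name are assumptions, and real placement decisions would weigh far more than a single ratio.

```python
# Rough check of whether a device's workload is read-heavy enough to suit a
# flash SSD. /proc/diskstats fields 4 and 8 are reads and writes completed
# since boot; the 80% read threshold here is an arbitrary assumption.

def read_write_counts(device):
    with open("/proc/diskstats") as stats:
        for line in stats:
            fields = line.split()
            if fields[2] == device:
                return int(fields[3]), int(fields[7])
    raise ValueError(f"device {device!r} not found in /proc/diskstats")

def suggest_tier(device, read_heavy_threshold=0.80):
    reads, writes = read_write_counts(device)
    read_ratio = reads / max(reads + writes, 1)
    tier = "flash SSD" if read_ratio >= read_heavy_threshold else "DRAM SSD"
    return read_ratio, tier

if __name__ == "__main__":
    ratio, tier = suggest_tier("sda")
    print(f"read ratio {ratio:.0%} -> consider a {tier} for this workload")
```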

Deploying SSD will require expertise not only from the SSD supplier but also from the customer. Each solution will have to be examined case by case to make sure the right type and quantity of the technology is applied. While the price can now be much more easily justified against the performance requirements, this manual deployment style will keep market growth more modest. I don't think we will see the 70% CAGR until better integration at the storage system level makes this a more automated process. Until SSD-based systems are so large and so inexpensive that the price difference is negligible, we will need storage solutions that automatically move hot blocks of data in and out of the SSD.
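
For illustration only, the toy simulation below sketches what such automated tiering might look like: count accesses per block, promote a block to a fixed-size SSD tier once it crosses a threshold, and demote the least-recently-used block to make room. The block granularity, tier size, and promotion threshold are assumptions, not any vendor's implementation.

```python
# Toy simulation of automated hot-block tiering between HDD and SSD.
from collections import OrderedDict, defaultdict

class HotBlockTier:
    def __init__(self, ssd_capacity_blocks=1024, promote_after=8):
        self.ssd = OrderedDict()               # block id -> None, in LRU order
        self.capacity = ssd_capacity_blocks
        self.promote_after = promote_after
        self.access_counts = defaultdict(int)

    def access(self, block_id):
        """Record one I/O to a block; return the tier that served it."""
        self.access_counts[block_id] += 1
        if block_id in self.ssd:
            self.ssd.move_to_end(block_id)     # refresh LRU position
            return "ssd"
        if self.access_counts[block_id] >= self.promote_after:
            self._promote(block_id)
        return "hdd"

    def _promote(self, block_id):
        if len(self.ssd) >= self.capacity:
            evicted, _ = self.ssd.popitem(last=False)   # demote coldest block
            self.access_counts[evicted] = 0
        self.ssd[block_id] = None

# Example: a block becomes "hot" after repeated access and moves to the SSD tier.
tier = HotBlockTier(ssd_capacity_blocks=4, promote_after=3)
for _ in range(4):
    served_from = tier.access(block_id=42)
print(served_from)   # "ssd" once block 42 has been promoted
```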

We will detail that in an upcoming entry.

Track us on Twitter: http://twitter.com/storageswiss.

Subscribe to our RSS feed.

George Crump is founder of Storage Switzerland, an analyst firm focused on the virtualization and storage marketplaces. It provides strategic consulting and analysis to storage users, suppliers, and integrators. An industry veteran of more than 25 years, Crump has held engineering and sales positions at various IT industry manufacturers and integrators. Prior to Storage Switzerland, he was CTO at one of the nation's largest integrators.
