Commentary | George Crump | 10/31/2008 09:01 AM

SSDs Are Not Confusing

Seems like every vendor I speak with is laying out its solid-state disk (SSD) strategy, and almost all say they're trying to help the customer through this confusing platform change. It's not confusing.

Today SSDs' primary function is to accelerate databases. There are specific files in a database that, under intense load, get hot; these files are ideal candidates for SSD. The second practical use case is in a NAS environment where thousands of users are viewing the same set of data over and over again; examples might be product images for a catalog site or heavily hit videos on a video-sharing site. While there are other use cases, these are the big two, and databases represent the lion's share of the market today.
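If you want to see that heat for yourself, a minimal Linux-only sketch like the one below can help: it samples /proc/diskstats twice and ranks block devices by read traffic over the interval. It only surfaces busy volumes; the database's own tooling (Oracle's AWR reports or SQL Server's DMVs, for instance) is what would pinpoint the specific hot files. The interval and top-5 cutoff are arbitrary choices.

```python
# Minimal Linux-only sketch: sample /proc/diskstats twice and rank
# block devices by read traffic over the interval.
import time

def read_sectors():
    stats = {}
    with open("/proc/diskstats") as f:
        for line in f:
            parts = line.split()
            stats[parts[2]] = int(parts[5])  # device name -> sectors read
    return stats

INTERVAL = 10  # seconds between samples
before = read_sectors()
time.sleep(INTERVAL)
after = read_sectors()

for dev in sorted(after, key=lambda d: after[d] - before.get(d, 0),
                  reverse=True)[:5]:
    mb = (after[dev] - before.get(dev, 0)) * 512 / 1e6  # 512-byte sectors
    print(f"{dev:<10} {mb:8.1f} MB read in {INTERVAL} s")
```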

In my experience, if you have one of these situations you know it. You are probably using, or looking at using, short stroking to address the performance issues, and at that point SSD begins to make sense, not only for performance but also for cost savings. Implementing SSD can reduce the number of physical spindles required, which saves power; it can reduce the need to add servers just to gain access bandwidth; and it can cut the database-tuning time these environments typically demand.
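To put rough numbers on that spindle argument, here is a back-of-the-envelope sketch. Every figure in it, drive IOPS, short-stroking uplift, per-device wattage, is an assumed, era-appropriate illustration rather than a vendor spec.

```python
import math

# Back-of-the-envelope spindle math; all figures are assumptions.
TARGET_IOPS = 20_000        # assumed hot-file random-read workload
HDD_IOPS = 180              # assumed 15K RPM drive, random I/O
SHORT_STROKE_GAIN = 1.4     # assumed uplift from short stroking
SSD_IOPS = 10_000           # assumed early-generation flash SSD
HDD_WATTS = 15              # assumed power draw per spinning drive
SSD_WATTS = 8               # assumed power draw per SSD

hdds = math.ceil(TARGET_IOPS / (HDD_IOPS * SHORT_STROKE_GAIN))
ssds = math.ceil(TARGET_IOPS / SSD_IOPS)

print(f"Short-stroked HDDs needed: {hdds:3d} (~{hdds * HDD_WATTS} W)")
print(f"SSDs needed:               {ssds:3d} (~{ssds * SSD_WATTS} W)")
```

Even with a generous short-stroking assumption, the gap in drive count and wattage is more than an order of magnitude, which is the cost-savings case in a nutshell.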

Why is this confusing? First, because the vendors getting into this space are new to it and are still figuring it out. In many cases some of their other solutions, like wide striping or enhanced caching, may suit customers with performance-related issues, so they are trying to work out when to tell a customer to use wide striping or enhanced caching vs. when to use SSD. The other issue is that as most of the traditional storage manufacturers enter the space, they are introducing Flash-based SSDs and, in many cases, integrating them into their existing storage systems.

While Flash-based SSD is acceptable for read-heavy environments, concerns remain about its reliability, and its random write I/O performance, while still better than that of mechanical drives, may not be worth the added cost. Also, by integrating Flash-based SSD into their standard storage shelves, these suppliers are exposing themselves to shelf bottleneck issues.

Shelf bottleneck issues occur because, in many cases, two or three flash drives can consume all the bandwidth the current drive shelf is able to deliver, leaving 10-plus drive slots unused. Many vendors will quickly mention that these open slots could be filled with SATA or Fibre Channel mechanical drives, but then you may run into speed-matching issues.
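Here is that bottleneck arithmetic made concrete, assuming a 4 Gb/s Fibre Channel loop delivers roughly 400 MB/s usable and an early flash SSD sustains about 200 MB/s; both figures are illustrative assumptions, not measurements.

```python
# Illustrative shelf-bandwidth arithmetic; all figures are assumptions
# for a 2008-era drive shelf.
SHELF_MBPS = 400    # assumed usable bandwidth of one 4 Gb/s FC loop
SSD_MBPS = 200      # assumed sustained throughput per flash SSD
SHELF_SLOTS = 15    # assumed drive slots in the shelf

ssds_to_saturate = -(-SHELF_MBPS // SSD_MBPS)  # ceiling division
print(f"SSDs that saturate the loop: {ssds_to_saturate}")
print(f"Slots left stranded:         {SHELF_SLOTS - ssds_to_saturate}")
```

Two drives saturating a 15-slot shelf is exactly the stranded-slot problem described above.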

In the end, storage manufacturers will likely have to come out with shelves designed specifically for SSD: perhaps a 1U enclosure that holds only four SSDs, or a larger standalone system built just for SSD, like those from Solid Data Systems, Texas Memory Systems, or Violin Memory. These manufacturers also provide DRAM-based SSD, which does not suffer from Flash's write I/O and reliability concerns.

Flash is not all bad, and there are some excellent uses for it, as we will discuss in our next entry.

Join us for our upcoming Webcast: Demystifying Primary Storage Data Reduction.

Track us on Twitter: http://twitter.com/storageswiss.

Subscribe to our RSS feed.

George Crump is founder of Storage Switzerland, an analyst firm focused on the virtualization and storage marketplaces. It provides strategic consulting and analysis to storage users, suppliers, and integrators. An industry veteran of more than 25 years, Crump has held engineering and sales positions at various IT industry manufacturers and integrators. Prior to Storage Switzerland, he was CTO at one of the nation's largest integrators.
