Commentary
3/28/2011 03:11 PM
George Crump

Collecting The SSD Garbage


Solid state storage (SSS) is the performance alternative to mechanical hard disk drives (HDD). Flash memory, thanks to its reduced cost compared to DRAM, has become the primary way SSS is delivered. Suppliers of flash systems, especially in the enterprise, have to overcome two flash deficiencies that, as we discussed in our last entry, can cause unpredictable performance and reduced reliability. In this entry we'll focus on how vendors are providing predictable performance.

As we discussed in our last entry, when a flash memory cell needs to be written to, it must be erased first. The erase is essentially a "write" that reprograms the cell; only then can new data be written to it. When flash storage is first used it contains no data, so there are no erase steps involved, and out-of-the-box performance is excellent for almost any flash device. Over time, of course, you use the drive, reading, writing, and deleting data as you go. As the device begins to run out of unused flash cell blocks to write new data to, it starts looking for old cells that have been marked for deletion. Eventually only cells holding either active or old data remain; there are no totally unused cells left. The drive can only stay "fresh" for so long. From that point forward, every time new data needs to be written an old cell must be cleared out first, and write performance drops off dramatically.
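To make the erase-before-write penalty concrete, here is a toy model of a handful of flash cells (my own sketch, not any vendor's controller logic): writes stay fast while never-used cells remain, and every write pays for an erase once only valid and stale cells are left.

# Toy model of flash's erase-before-write behavior (illustrative only; a real
# controller manages pages, blocks, wear leveling, and mapping tables).
class ToyFlash:
    def __init__(self, num_cells):
        # Each cell is 'free' (never used), 'valid' (live data),
        # or 'stale' (marked for deletion but not yet erased).
        self.cells = ['free'] * num_cells
        self.erase_ops = 0

    def write(self):
        # Fast path: an unused cell can be programmed directly.
        if 'free' in self.cells:
            self.cells[self.cells.index('free')] = 'valid'
            return 'fast write'
        # Slow path: no unused cells left, so a stale cell must be erased first.
        if 'stale' in self.cells:
            self.erase_ops += 1          # this extra erase is what hurts latency
            self.cells[self.cells.index('stale')] = 'valid'
            return 'erase + write'
        raise RuntimeError('device is full of valid data')

    def delete(self, idx):
        # Deleting only marks the cell stale; nothing is erased yet.
        self.cells[idx] = 'stale'

flash = ToyFlash(num_cells=4)
print([flash.write() for _ in range(4)])  # all 'fast write': out-of-box behavior
flash.delete(0); flash.delete(2)          # old data marked for deletion
print(flash.write())                      # 'erase + write': the slow path begins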

To get around this problem, most flash-based storage systems have a process called garbage collection, which pre-clears old memory areas so that on a write the only thing the flash controller has to do is write the new data chunk. This process runs in the background on the storage device, driven by the flash controllers. That should fix long-term performance problems in most systems: once the garbage collection process has run, you should see predictable performance from your SSS. The problem is that under heavy write conditions the garbage collection process may not be able to keep ahead of the inbound writes.
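A background garbage collector changes that picture by erasing stale cells ahead of time, off the write path. The sketch below is a hypothetical helper for the ToyFlash model above, not real firmware; it reclaims a limited number of cells per pass, and if writes arrive faster than that budget allows, the free pool empties and writes fall back to the slow erase-then-write path.

def garbage_collect(flash, budget=2):
    """Pre-erase up to `budget` stale cells per background pass (toy model)."""
    reclaimed = 0
    for idx, state in enumerate(flash.cells):
        if reclaimed == budget:
            break
        if state == 'stale':
            flash.cells[idx] = 'free'   # the erase happens here, not on a host write
            flash.erase_ops += 1
            reclaimed += 1
    return reclaimed

# Run periodically in the background: as long as reclaimed cells keep pace with
# incoming writes, the host only ever sees the fast write path.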

When write activity is high, a lot of data is also being marked for deletion, and the garbage collection process has to work hard to find the cells eligible for pre-erase. Using flash SSS as a cache area is a good example of when this might happen. In a cache, data is constantly being turned over as some data sets are promoted into the cache while others are demoted (meaning they are removed from the cache, but the underlying cells are not cleared until garbage collection gets to them). Regardless of the use case, when the garbage collection process cannot keep up with the write activity, performance will suffer. The challenge for the data center is that it can't predict with any accuracy when that might occur, and predictability of performance is as important in most data centers as high performance.

To solve this problem, vendors have used a variety of techniques. First, many under-allocate the amount of flash that the storage system can access. For example, a 200GB solid state device might contain 225GB of real memory, but the storage system only sees 200GB. This ensures that the garbage collection process and the flash controller always have extra free space to work with. Another step is to make sure that the individual flash controllers are powerful enough to drive the garbage collection process fast enough to stay ahead of the writes. Finally, many flash vendors are using multiple flash controllers, each focused on a particular region of the flash storage system; in other words, the process is run on a smaller area by multiple controllers.
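As a quick worked example of that under-allocation (often called over-provisioning), the article's figures imply the following spare ratio:

# Spare capacity hidden from the host, using the 200GB/225GB example above.
raw_gb = 225
usable_gb = 200
spare_gb = raw_gb - usable_gb
over_provisioning_pct = spare_gb / usable_gb * 100
print(f"{spare_gb} GB spare -> {over_provisioning_pct:.1f}% over-provisioning")
# 25 GB spare -> 12.5% over-provisioning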

In the enterprise, predictable performance is critical. Flash vendors get that, and several systems now show that, even under the heaviest write loads, flash performance holds at nearly the same level throughout the peak. This becomes an important area to test and one that we will discuss in our webinar on Understanding SSD Specmanship.

Track us on Twitter: http://twitter.com/storageswiss

Subscribe to our RSS feed.

George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Find Storage Switzerland's disclosure statement here.
