
George Crump, President, Storage Switzerland

March 28, 2011


Solid state storage (SSS) is the performance alternative to mechanical hard disk drives (HDDs). Flash memory, thanks to its lower cost compared with DRAM, has become the primary way SSS is delivered. Suppliers of flash systems, especially in the enterprise, have to overcome two flash deficiencies that, as we discussed in our <a href="http://www.informationweek.com/blog/main/archives/2011/03/understanding_s_3.html">last entry</a>, cause unpredictable performance and reduce reliability. In this entry we'll focus on how vendors are providing predictable performance.

As we discussed in that entry, when a flash memory cell needs to be written to, it must be erased first. The erase is essentially a "write" that reprograms the cell; only then can the new data be stored. When flash storage is first used it contains no data, so there are no erase steps involved, and out-of-the-box performance is excellent for almost any flash device. Over time, of course, you use the drive, reading, writing, and deleting data as you go. As the device begins to run out of unused flash cell blocks to write new data to, it starts looking for old cells that have been marked for deletion. Eventually there are only cells holding either active or old data; there are no totally unused cells left. The drive can only stay fresh for so long. From that point forward, every time new data needs to be written, an old cell must be cleared out first, and write performance drops off dramatically.
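To make the erase-before-write penalty concrete, here is a tiny, purely illustrative Python model. The block count and microsecond costs are made up and don't describe any real drive; the point is only the jump in write cost once no unused cells remain.

```python
# Purely illustrative model of the erase-before-write penalty. The block
# count and microsecond costs are invented; only the pattern matters.

WRITE_US = 200    # assumed cost to program an already-erased block (microseconds)
ERASE_US = 1500   # assumed cost to erase a block so it can be reprogrammed

class FlashDevice:
    def __init__(self, total_blocks):
        self.unused_blocks = total_blocks   # never-written, pre-erased blocks

    def write_block(self):
        """Return the simulated cost, in microseconds, of writing one block."""
        if self.unused_blocks > 0:
            self.unused_blocks -= 1
            return WRITE_US                 # fresh cell: just program it
        # no unused cells left: an old cell must be cleared out first
        return ERASE_US + WRITE_US

dev = FlashDevice(total_blocks=4)
for i in range(6):
    print(f"write {i + 1}: {dev.write_block()} us")
# The first four writes cost 200 us each; once the unused blocks are gone,
# every write jumps to 1700 us because of the erase that must happen first.
```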

To get around this problem, most flash-based storage systems have a process called garbage collection, which pre-clears old memory areas so that on a write the only thing the flash controller has to do is write the new data chunk. The process runs in the background on the storage device, driven by the flash controllers. In most systems this should fix the long-term performance problem: once the garbage collection process has run, you should see predictable performance from your SSS. The problem is that under heavy write conditions the garbage collection process may not be able to keep ahead of the inbound writes.
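As a rough sketch of the idea, and not any vendor's actual firmware, the collector can be thought of as a background loop that pulls blocks marked for deletion, erases them, and drops them into a ready pool the write path can draw from. The names and timings below are assumptions for illustration.

```python
# Minimal sketch of a background garbage collector that keeps a pool of
# pre-erased blocks ready for incoming writes. Structure, names, and
# timings are assumptions for illustration, not any vendor's firmware.
import threading
import time
from collections import deque

class GarbageCollector:
    def __init__(self, stale_blocks):
        self.stale = deque(stale_blocks)   # blocks already marked for deletion
        self.ready = deque()               # pre-erased blocks, ready for new data
        self.lock = threading.Lock()

    def start(self, erase_time_s=0.0015):
        """Run the reclaim loop in the background, as the flash controller would."""
        def loop():
            while True:
                with self.lock:
                    block = self.stale.popleft() if self.stale else None
                if block is None:
                    time.sleep(0.001)       # nothing to reclaim right now
                    continue
                time.sleep(erase_time_s)    # simulated erase latency
                with self.lock:
                    self.ready.append(block)
        threading.Thread(target=loop, daemon=True).start()

    def take_clean_block(self):
        """The write path just grabs a pre-erased block -- no erase on the write."""
        with self.lock:
            return self.ready.popleft() if self.ready else None
```

The key point is that the expensive erase happens off the write path; as long as the ready pool never empties, write latency stays flat.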

When write activity is high, a lot of data is also being marked for deletion, and the garbage collection process has to work hard to find the cells eligible for pre-erase. Using flash SSS as a cache area is a good example of when this might happen. In a cache, data is constantly being turned over as some data sets are promoted into cache while others are demoted (removed from the cache but not yet cleared by the garbage collection process). Regardless of the use case, when the garbage collection process can't keep up with the write activity, performance will suffer. The challenge facing the data center is that it can't predict with any accuracy when that might occur, and predictability of performance is as important in most data centers as high performance itself.
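A back-of-the-envelope calculation shows why the drop-off is hard to call in advance. The rates and pool size below are invented for illustration only:

```python
# Back-of-the-envelope check on whether garbage collection can keep up.
# All of the rates and pool sizes below are invented for illustration.

write_rate = 50_000        # incoming writes, blocks per second (assumed)
gc_erase_rate = 30_000     # blocks per second the collector can pre-erase (assumed)
spare_pool = 1_000_000     # pre-erased blocks available when the burst starts (assumed)

deficit = write_rate - gc_erase_rate
if deficit > 0:
    seconds_until_cliff = spare_pool / deficit
    print(f"Writes outpace GC by {deficit} blocks/s; the pre-erased pool "
          f"is exhausted after roughly {seconds_until_cliff:.0f} seconds.")
else:
    print("GC keeps up with the writes; latency stays predictable.")
```

With these made-up numbers the pre-erased pool lasts about 50 seconds, but real write and deletion rates shift constantly, which is exactly why the data center can't predict when the cliff arrives.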

To solve this problem, vendors have used a variety of techniques. First, many under-allocate the amount of flash that the storage system can access. For example, a 200GB solid state device might contain 225GB of real memory while the storage system only sees 200GB. This makes sure the garbage collection process and the flash controller always have extra free space to work with. Another step is to make sure the individual flash controllers are powerful enough to drive the garbage collection process fast enough to stay ahead of the writes. Finally, many flash vendors are using multiple flash controllers, each focused on a particular region of the flash storage system; in other words, the process is being run on a smaller area by multiple controllers.
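Using the 200GB example, the reserved space works out as follows (expressing over-provisioning as spare capacity divided by user capacity is one common convention, not something the vendors here specify):

```python
# Over-provisioning in the 200GB example: 225GB of raw flash, 200GB visible.
raw_gb = 225
usable_gb = 200

spare_gb = raw_gb - usable_gb
op_percent = spare_gb / usable_gb * 100   # spare space relative to user capacity

print(f"{spare_gb} GB of flash ({op_percent:.1f}% over-provisioning) is held back")
print("so the controller and garbage collector always have free blocks to work with.")
# -> 25 GB of flash (12.5% over-provisioning) is held back
```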

In the enterprise, predictable performance is critical. Flash vendors get that, and several systems now maintain nearly the same level of performance throughout even the heaviest peaks of write activity. This becomes an important area to test and one that we will discuss in our webinar on Understanding SSD Specmenship.

Track us on Twitter: http://twitter.com/storageswiss

Subscribe to our RSS feed.

George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Find Storage Switzerland's disclosure statement here.

About the Author(s)

George Crump

President, Storage Switzerland

George Crump is president and founder of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. With 25 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, and SAN. Prior to founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration, and product selection. George is responsible for the storage blog on InformationWeek's website and is a regular contributor to publications such as Byte and Switch, SearchStorage, eWeek, SearchServerVirtualization, and SearchDataBackup.
