News | Commentary
George Crump
5/29/2012 12:59 PM

SSD Tiering: Why Caching Won't Die

Solid state storage is fast, but speed alone doesn't solve data management challenges.

Won't everything be simpler when we can afford to make all the storage in the data center solid state? Everything will be fast and we won't have to worry about moving data between different tiers of storage.

That sounds good, but it isn't exactly accurate. The solid state data center will still have tiers, and something will need to manage them.

Solid state tiering will be drawn across two lines: performance and durability. As with hard disk tiering and caching, part of the reason to have multiple tiers of solid state disk (SSD) storage is that different types of SSD have different performance capabilities. For example, RAM still sets the performance bar, outperforming flash on both reads and (especially) writes. Even flash has different performance characteristics, the most dramatic example being enterprise multi-level cell (eMLC), which, in an attempt to improve durability, writes data significantly more slowly than the more expensive single-level cell (SLC).

The other line that will separate the solid state tiers is the durability of each type of solid state storage. DRAM is the most durable medium, but it is volatile and can't survive power loss. Each type of flash offers a different level of durability, with SLC having the longest life expectancy and triple-level cell (TLC) being the least durable (as well as the least expensive). As we discuss in our recent article "Optimizing MLC SSDs For The Enterprise," solid state storage vendors are doing incredible work with flash controller technology to increase the life expectancy of all types of flash-based solid state storage. Even so, media longevity will continue to be a key differentiator between flash types.
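To make those two dividing lines concrete, here is a minimal sketch in Python of how the tiers might be modeled. The Tier class and all of the figures are illustrative assumptions, not measured or vendor-quoted numbers; the relative ordering, not the values, is the point.

```python
from dataclasses import dataclass

@dataclass
class Tier:
    """One class of solid state media, characterized along the two
    dividing lines described above: performance and durability."""
    name: str
    write_latency_us: float  # rough write latency, in microseconds
    pe_cycles: int           # approximate program/erase endurance
    volatile: bool           # True if contents are lost on power failure

# Ballpark, illustrative figures only; real numbers vary widely by
# vendor and controller generation.
TIERS = [
    Tier("DRAM", write_latency_us=0.1,   pe_cycles=10**15,  volatile=True),
    Tier("SLC",  write_latency_us=25.0,  pe_cycles=100_000, volatile=False),
    Tier("eMLC", write_latency_us=100.0, pe_cycles=30_000,  volatile=False),
    Tier("MLC",  write_latency_us=90.0,  pe_cycles=5_000,   volatile=False),
    Tier("TLC",  write_latency_us=150.0, pe_cycles=1_000,   volatile=False),
]
```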

For now, caching and tiering will remain the answer. First, the completely solid state data center is a long way off, if we ever actually get there; in all likelihood, disk and tape will play a role in most data centers for the majority of our IT careers. Second, even as the data center becomes predominantly solid state, we will want to move data between the types of solid state for the reasons described above.

Another wrinkle that solid state brings is the location of each tier. It may not make sense to keep very active data in a storage system at all. It may be better for new, very active data to be kept in RAM or SLC flash installed in the server before being moved to an SSD cache in the network or storage system. Data that is active but more reference in nature could then be moved to eMLC, MLC, or even TLC as appropriate. The final tier may be a move to hard disk or tape.
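A placement policy along those lines might look like the following sketch; the activity thresholds and tier labels are hypothetical, chosen only to mirror the progression just described.

```python
def place(age_hours: float, reads_per_hour: float) -> str:
    """Hypothetical placement policy mirroring the progression above:
    server-side RAM/SLC for brand-new, very active data, a shared SSD
    cache for active data, capacity flash for reference data, and
    hard disk or tape as the final tier."""
    if age_hours < 1 and reads_per_hour > 1000:
        return "server RAM / SLC flash"        # newest, hottest data
    if reads_per_hour > 100:
        return "SSD cache (network or array)"  # still very active
    if reads_per_hour > 1:
        return "eMLC/MLC/TLC capacity flash"   # active but reference-like
    return "hard disk / tape"                  # coldest, final tier
```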

Given the data growth expected over the next several years, combined with the continual shortage of IT personnel, asking an already overworked storage administrator to manage this movement by hand may be too much. Instead, software will need to be intelligent enough to move data between the different tiers of SSD, both in the storage system and in the server.
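Such software might amount to little more than a background loop that continuously re-evaluates placement. Here is a minimal sketch under the same assumptions as the earlier snippets; the Extent objects and their migrate_to() method are likewise invented for illustration.

```python
import time

def tiering_daemon(extents, interval_s=300):
    """Hypothetical background mover: periodically re-score each
    extent's activity and migrate it when its ideal tier changes,
    so no administrator has to shuffle data between tiers by hand.
    Reuses the place() policy sketched above."""
    while True:
        for ext in extents:
            target = place(ext.age_hours(), ext.read_rate())
            if target != ext.current_tier:
                ext.migrate_to(target)  # promote or demote as needed
        time.sleep(interval_s)
```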

As we analyzed the many announcements from EMC World last week, we said that automated tiering or caching is an important weapon in a vendor's arsenal as it begins to use a variety of solid state technologies to solve performance problems. Caching and tiering can no longer be looked at as merely a gateway to the solid state data center. The reality is that we will need to cache and tier in the solid state data center just as we need to in the mixed data center today.


Comments
CalistaHerdhart
5/31/2012 | 5:48:17 AM
re: SSD Tiering: Why Caching Won't Die
> Instead, software will need to be intelligent enough to move data to different tiers of SSD both in the storage system and in the server.

George -- end user here...

Based on the offerings out there, automated tiering algorithms don't seem to be very smart, or they come with a bunch of caveats. I still can't find anyone recommending (off the record) automated tiering for production apps without restrictions.

Maybe I am having problems decoding the vendor marketing speak on that last part, but I can't figure out how tiering is cost-effective versus using flash as a cache...

Would love to hear whether this is really being tackled, and where it would fit versus flash being used as a cache...

Until then, I am steering clear of storage tiering for my next deployment.