5/29/2012 12:59 PM
George Crump
Commentary

SSD Tiering: Why Caching Won't Die

Solid state storage is fast, but speed alone doesn't solve data management challenges.

Won't everything be simpler when we can afford to make all the storage in the data center solid state? Everything will be fast and we won't have to worry about moving data between different tiers of storage.

That sounds good--but it isn't exactly accurate. The solid state data center will still have tiers, and something will need to manage them.

Solid state tiering will be drawn across two lines: performance and durability. As with hard disk tiering and caching, part of the reason to have multiple tiers of solid state disk (SSD) storage is that different types of SSD have different performance capabilities. For example, RAM still sets the performance bar, outperforming flash on both reads and (especially) writes. Even flash has different performance characteristics, with the most dramatic example being enterprise multi-level cell (eMLC), which, in an attempt to improve durability, writes data significantly more slowly than the more expensive single-level cell (SLC).
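To make that performance ordering concrete, here is a minimal sketch. The latency figures are rough, illustrative assumptions for discussion only, not vendor specifications; real numbers vary widely by device, controller, and workload.

```python
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    read_latency_us: float   # illustrative figures only -- not vendor specs
    write_latency_us: float

# RAM leads on both reads and (especially) writes; eMLC trades write
# speed for durability relative to the more expensive SLC.
TIERS = [
    Tier("DRAM", 0.1, 0.1),
    Tier("SLC flash", 25.0, 250.0),
    Tier("eMLC flash", 50.0, 900.0),
]

# Writes separate the tiers far more dramatically than reads do.
by_write_speed = sorted(TIERS, key=lambda t: t.write_latency_us)
```

Sorting by write latency, rather than read latency, is what exposes the gap the article describes: the read-side spread between SLC and eMLC is modest, but the write-side spread is not.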

The other line that will be used to separate the solid state tiers is the durability of each type of solid state storage. DRAM is the most durable, but it is also volatile and can't survive power loss. Each type of flash offers a different level of durability, with SLC having the longest life expectancy and triple-level cell (TLC) being the least durable (as well as the least expensive). As we discuss in our recent article "Optimizing MLC SSDs For The Enterprise," solid state storage vendors are doing incredible work with flash controller technology to extend the life expectancy of all types of flash-based solid state storage. Even so, media longevity will continue to be a key differentiator between flash types.
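The durability gap can be sketched as a back-of-the-envelope lifetime estimate. The program/erase (P/E) cycle budgets below are ballpark, order-of-magnitude assumptions, not measured endurance ratings; actual figures depend on process node and controller.

```python
# Ballpark P/E cycle budgets per cell type -- illustrative assumptions
# only; real endurance varies by process node and controller firmware.
PE_CYCLES = {"SLC": 100_000, "eMLC": 30_000, "MLC": 10_000, "TLC": 3_000}

def estimated_years(cell_type: str, capacity_gb: float,
                    gb_written_per_day: float,
                    write_amplification: float = 2.0) -> float:
    """Crude drive-lifetime estimate: total writable data / daily load.

    Write amplification (controller overhead per host write) is assumed
    to be a flat 2x here; in practice it varies with workload.
    """
    total_writable_gb = PE_CYCLES[cell_type] * capacity_gb / write_amplification
    return total_writable_gb / gb_written_per_day / 365.0
```

Even with generous controller-level mitigation, the model shows why write-heavy data belongs on SLC while colder data can safely land on MLC or TLC.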

For now, caching and tiering will remain the answer. First, a completely solid state data center is a long way off, if we ever actually get there. In all likelihood, disk and tape will play a role in most data centers for the majority of our IT careers. Second, even as the data center becomes predominantly solid state, we will want to move data between the types of solid state for the reasons described above.

Another wrinkle that solid state brings is the location of the tier. It may not make sense to keep very active data in a storage system at all. It may be better for new, very active data to be kept in RAM or SLC flash installed in the server, prior to being moved to an SSD cache in the network or storage system. Then data that is active, but more reference in nature, could be moved to eMLC, MLC, or even TLC as appropriate. The final tier may be a move to hard disk or tape.
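The tier progression above can be sketched as a simple idle-time placement rule. The tier names follow the article, but the day thresholds are made-up round numbers for illustration, not any vendor's actual policy.

```python
# Data descends through the tiers as it cools off. Thresholds are in
# days since last access and are illustrative assumptions only.
TIER_BY_IDLE_DAYS = [
    (0.0,   "server RAM / SLC flash"),    # new, very active data
    (1.0,   "network / array SSD cache"),
    (7.0,   "eMLC or MLC flash"),         # active but reference-like
    (30.0,  "TLC flash"),
    (365.0, "hard disk or tape"),         # final tier
]

def place(idle_days: float) -> str:
    """Return the deepest tier whose idle threshold the data has crossed."""
    chosen = TIER_BY_IDLE_DAYS[0][1]
    for threshold, tier in TIER_BY_IDLE_DAYS:
        if idle_days >= threshold:
            chosen = tier
    return chosen
```

A real implementation would weigh access frequency and I/O size as well as recency, but even this crude rule illustrates the point that follows: nobody wants to apply it by hand across thousands of volumes.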

Given the data growth expected over the next several years, combined with the continual shortage of IT personnel, asking an already overworked storage administrator to manage this data movement manually may be too much. Instead, software will need to be intelligent enough to move data to different tiers of SSD both in the storage system and in the server.

In analyzing the announcements from EMC World last week, we noted that automated tiering or caching is an important weapon in a vendor's arsenal as it begins to use a variety of solid state technologies to solve performance problems. Caching/tiering can no longer be looked at as merely a gateway to the solid state data center. The reality is that we will need to cache and tier in the solid state data center just as we need to in the mixed data center today.


Comments
CalistaHerdhart, User Rank: Apprentice
5/31/2012 | 5:48:17 AM
re: SSD Tiering: Why Caching Won't Die
> Instead, software will need to be intelligent enough to move data to different tiers of SSD both in the storage system and in the server.

George -- end user here...

Based on the offerings out there -- automated tiering algorithms don't seem to be very smart, or they come with a bunch of caveats. I still can't find anyone recommending (off the record) automated tiering for production apps without restrictions.

Maybe I am having problems decoding the vendor marketing-speak on that last part, but I can't figure out how tiering is cost-effective versus using flash as a cache...

Would love to hear if this is really being tackled, and where it would fit versus flash being used as a cache...

Until then I am steering clear of storage tiering for my next deployment.