5/29/2012 12:59 PM
George Crump
Commentary
SSD Tiering: Why Caching Won't Die

Solid state storage is fast, but speed alone doesn't solve data management challenges.

Won't everything be simpler when we can afford to make all the storage in the data center solid state? Everything will be fast and we won't have to worry about moving data between different tiers of storage.

That sounds good--but it isn't exactly accurate. The solid state data center will still have tiers, and something will need to manage them.

Solid state tiering will be drawn across two lines: performance and durability. As with hard disk tiering and caching, part of the reason to have multiple tiers of solid state disk (SSD) storage is that different types of SSD have different performance capabilities. For example, RAM still sets the performance bar, outperforming flash on both reads and (especially) writes. Even flash has different performance characteristics, with the most dramatic example being enterprise multi-level cell (eMLC), which, in an attempt to improve durability, writes data significantly more slowly than the more expensive single-level cell (SLC).

The other line that will be used to separate the solid state tiers is the durability of each type of solid state storage. DRAM is the most durable, but it is also volatile and can't survive power loss. Each type of flash offers a different level of durability, with SLC having the longest life expectancy and triple-level cell (TLC) being the least durable (as well as the least expensive). As we discuss in our recent article "Optimizing MLC SSDs For The Enterprise," solid state storage vendors are doing incredible work with flash controller technology to increase the life expectancy of all types of flash-based solid state storage. Even so, media longevity will continue to be a key differentiator between flash types.
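The durability gap can be made concrete with a back-of-the-envelope lifetime estimate. The sketch below uses assumed, order-of-magnitude program/erase (P/E) cycle ratings and an assumed write-amplification factor -- real figures vary widely by vendor and controller, so treat every number here as illustrative:

```python
# Rough drive-lifetime estimate from rated P/E cycles.
# All ratings and the write-amplification factor are assumptions for
# illustration, not any vendor's specification.

RATED_PE_CYCLES = {   # assumed order-of-magnitude ratings, circa 2012
    "SLC": 100_000,
    "eMLC": 30_000,
    "MLC": 5_000,
    "TLC": 1_000,
}

def years_of_life(flash_type, capacity_gb, writes_gb_per_day, write_amp=2.0):
    """Years until the rated P/E cycles are exhausted.

    write_amp models controller write amplification (assumed factor):
    each host write costs more than one flash write.
    """
    total_writable_gb = RATED_PE_CYCLES[flash_type] * capacity_gb
    effective_daily_writes = writes_gb_per_day * write_amp
    return total_writable_gb / effective_daily_writes / 365

# Same 200 GB drive, same 500 GB/day workload, very different lifetimes:
for t in ("SLC", "eMLC", "MLC", "TLC"):
    print(f"{t}: {years_of_life(t, 200, 500):.1f} years")
```

Under these assumed numbers, the SLC device outlives the TLC device by two orders of magnitude on the same workload -- which is exactly why a tiering engine should steer write-heavy data away from the cheaper media.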

For now, caching and tiering will remain the answer. First, a completely solid state data center is a long way off, if we ever actually get there. In all likelihood, disk and tape will play a role in most data centers for the majority of our IT careers. Second, even as the data center becomes predominantly solid state, we will want to move data between the types of solid state for the reasons described above.

Another wrinkle that solid state brings is the location of the tier. It may not make sense to keep very active data in a storage system. It may be better for new, very active data to be kept in RAM or SLC flash installed in the server, prior to being moved to an SSD cache in the network or storage system. Then data that is active, but more reference in nature, could be moved to eMLC, MLC, or even TLC as appropriate. The final tier may be a move to hard disk or tape.
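The tier ladder just described -- server memory at the top, tape at the bottom -- can be sketched as a simple ordered list with a demotion step. Tier names and the single-step demotion policy are illustrative assumptions, not any product's implementation:

```python
# Sketch of the tier ladder described above: data enters at the fastest
# tier and is demoted one step at a time as it cools. The tier names
# and ordering are assumptions for illustration.

TIERS = [
    "server_ram",       # new, very active data in the server
    "server_slc",       # SLC flash installed in the server
    "array_ssd_cache",  # SSD cache in the network or storage system
    "emlc",
    "mlc",
    "tlc",              # active but reference-type data
    "hdd",
    "tape",             # final resting place
]

def demote(current_tier):
    """Return the next tier down the ladder; the bottom tier stays put."""
    i = TIERS.index(current_tier)
    return TIERS[min(i + 1, len(TIERS) - 1)]
```

A real engine would also promote data back up the ladder on renewed activity; this one-way sketch only captures the cooling-off path the paragraph above walks through.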

Given the data growth expected over the next several years, combined with the continual shortage of IT personnel, asking an already overworked storage administrator to manage these conditions may be too much. Instead, software will need to be intelligent enough to move data to different tiers of SSD both in the storage system and in the server.
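What would that intelligence look like? At its simplest, an automated tiering engine counts accesses per block over a window and produces promotion and demotion lists. The class below is a minimal sketch under assumed thresholds -- real engines weigh recency, I/O size, and read/write mix, none of which is modeled here:

```python
# Minimal automated-tiering policy sketch: promote hot blocks, demote
# cold ones, based on access counts over a sampling window. The
# thresholds and the class itself are illustrative assumptions.

from collections import defaultdict

class TieringEngine:
    def __init__(self, promote_at=100, demote_at=5):
        self.access_counts = defaultdict(int)
        self.promote_at = promote_at  # accesses/window to move up a tier
        self.demote_at = demote_at    # accesses/window to move down a tier

    def record_access(self, block_id):
        """Called on every I/O to the block (hypothetical hook point)."""
        self.access_counts[block_id] += 1

    def plan(self):
        """End the window: return (to_promote, to_demote) and reset counters."""
        promote = [b for b, n in self.access_counts.items() if n >= self.promote_at]
        demote = [b for b, n in self.access_counts.items() if n <= self.demote_at]
        self.access_counts.clear()
        return promote, demote
```

The point of the sketch is that the decision logic lives in software, runs continuously, and never asks the administrator which block belongs where -- which is the burden the paragraph above argues must be lifted.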

In our analysis of the announcements from EMC World last week, we noted that automated tiering or caching is an important weapon in a vendor's arsenal as it begins to use a variety of solid state technologies to solve performance problems. Caching and tiering can no longer be looked at as merely a stepping stone on the way to the solid state data center. The reality is that we will need to cache and tier in the solid state data center just as we need to in the mixed data center today.


Comments
CalistaHerdhart
5/31/2012 5:48 AM
re: SSD Tiering: Why Caching Won't Die
> Instead, software will need to be intelligent enough to move data to different tiers of SSD both in the storage system and in the server.

George -- end user here...

Based on the offerings out there, automated tiering algorithms don't seem to be very smart, or they come with a bunch of caveats. I still can't find anyone recommending (off the record) automated tiering for production apps without any restrictions.

Maybe I am having problems decoding the vendor marketing speak on the last part, but I can't figure out how tiering is cost effective versus using flash as a cache...

Would love to hear if this is really being tackled, and where it would fit versus flash being used as a cache...

Until then I am steering clear of storage tiering for my next deployment.