Commentary
George Crump
5/29/2012 12:59 PM

SSD Tiering: Why Caching Won't Die

Solid state storage is fast, but speed alone doesn't solve data management challenges.

Won't everything be simpler when we can afford to make all the storage in the data center solid state? Everything will be fast and we won't have to worry about moving data between different tiers of storage.

That sounds good--but it isn't exactly accurate. The solid state data center will still have tiers, and something will still need to manage them.

Solid state tiering will be drawn along two lines: performance and durability. As with hard disk tiering and caching, part of the reason to have multiple tiers of solid state disk (SSD) storage is that different types of SSD have different performance capabilities. For example, RAM still sets the performance bar, outperforming flash on both reads and (especially) writes. Even among flash types performance varies; the most dramatic example is enterprise multi-level cell (eMLC), which, in an attempt to improve durability, writes data significantly more slowly than the more expensive single-level cell (SLC).

The other line separating the solid state tiers is the durability of each type of solid state storage. DRAM is the most durable medium, but it is also volatile and can't survive power loss. Each type of flash offers a different level of durability, with SLC having the longest life expectancy and triple-level cell (TLC) the shortest (it is also the least expensive). As we discuss in our recent article "Optimizing MLC SSDs For The Enterprise," solid state storage vendors are doing impressive work with flash controller technology to extend the life expectancy of all types of flash-based solid state storage. Even so, media longevity will continue to be a key differentiator between flash types.
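
To put rough numbers behind that durability gap, here is a back-of-envelope endurance sketch in Python. The program/erase (P/E) cycle counts, write amplification factor, drive capacity, and lifespan below are illustrative assumptions, not figures from this article or any vendor's spec sheet; the point is only to show how quickly rated life diverges across SLC, eMLC, MLC, and TLC.

# Back-of-envelope flash endurance estimate. The P/E cycle counts below
# are illustrative assumptions (not vendor specs); real figures vary by
# process geometry and controller technology.

ILLUSTRATIVE_PE_CYCLES = {
    "SLC":  100_000,
    "eMLC":  30_000,
    "MLC":    3_000,
    "TLC":    1_000,
}

def lifetime_writes_tb(capacity_gb, pe_cycles, write_amplification=2.0):
    """Approximate total terabytes the media can absorb before wear-out."""
    return capacity_gb * pe_cycles / write_amplification / 1_000

def drive_writes_per_day(pe_cycles, lifespan_years=5, write_amplification=2.0):
    """Full-drive writes per day the media could sustain over its lifespan."""
    return pe_cycles / write_amplification / (lifespan_years * 365)

if __name__ == "__main__":
    for flash_type, cycles in ILLUSTRATIVE_PE_CYCLES.items():
        print(f"{flash_type:>5}: ~{lifetime_writes_tb(400, cycles):,.0f} TB written, "
              f"~{drive_writes_per_day(cycles):.1f} drive writes/day over 5 years")

Under these assumed numbers a 400-GB SLC device could absorb roughly 100 times the lifetime writes of an equivalent TLC device, which is why controller tricks can narrow the gap but not erase it.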

For now, caching and tiering will remain the answer. First, the completely solid state data center is a long way off, if we ever actually get there; in all likelihood, disk and tape will play a role in most data centers for the majority of our IT careers. Second, even as the data center becomes predominantly solid state, we will want to move data between the types of solid state storage for the reasons described above.

Another wrinkle that solid state brings is the location of each tier. It may not make sense to keep very active data in a shared storage system at all. New, very active data may be better kept in RAM or SLC flash installed in the server, then moved to an SSD cache in the network or storage system. Data that is still active but more reference in nature could then be moved to eMLC, MLC, or even TLC as appropriate. The final tier may be a move to hard disk or tape.

Given the data growth expected over the next several years and the continual shortage of IT personnel, asking an already overworked storage administrator to manage all of this by hand may be too much. Instead, software will need to be intelligent enough to move data to different tiers of SSD both in the storage system and in the server.
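
To make the idea concrete, here is a minimal sketch in Python of the kind of placement logic such software would need. The tier names follow the hierarchy described above (server-side DRAM/SLC, an SSD cache in the network or array, an eMLC/MLC/TLC capacity tier, and finally disk or tape), but the idle-time thresholds, data structures, and function names are hypothetical, not drawn from any shipping product.

import time
from dataclasses import dataclass
from typing import List, Optional

# Tier ladder loosely following the hierarchy described above; names and
# idle-time thresholds are illustrative assumptions, not product features.
TIERS = ["server_dram_slc", "array_ssd_cache", "emlc_mlc_tlc_capacity", "disk_or_tape"]

# Demote data that has not been accessed within these windows (seconds).
IDLE_LIMITS = {
    "server_dram_slc":       60 * 60,            # 1 hour
    "array_ssd_cache":       24 * 60 * 60,       # 1 day
    "emlc_mlc_tlc_capacity": 30 * 24 * 60 * 60,  # 30 days
}

@dataclass
class Extent:
    """A chunk of data tracked by the hypothetical tiering engine."""
    name: str
    tier: str
    last_access: float  # epoch seconds

def next_tier(current: str) -> str:
    """Return the next colder tier, or the current tier if already coldest."""
    i = TIERS.index(current)
    return TIERS[min(i + 1, len(TIERS) - 1)]

def rebalance(extents: List[Extent], now: Optional[float] = None) -> None:
    """Demote any extent that has sat idle longer than its tier allows."""
    now = time.time() if now is None else now
    for ext in extents:
        limit = IDLE_LIMITS.get(ext.tier)
        if limit is not None and now - ext.last_access > limit:
            ext.tier = next_tier(ext.tier)

if __name__ == "__main__":
    data = [
        Extent("orders_db_hot",   "server_dram_slc", time.time() - 30),
        Extent("q1_reports_cold", "array_ssd_cache", time.time() - 3 * 86400),
    ]
    rebalance(data)
    for ext in data:
        print(f"{ext.name} -> {ext.tier}")

A real engine would also promote data back up the ladder on renewed access and weigh write endurance when choosing among flash tiers, but the demotion loop captures the basic idea.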

As we analyzed many of the announcements from EMC World last week, we noted that automated tiering or caching is an important weapon in a vendor's arsenal as it begins to use a variety of solid state technologies to solve performance problems. Caching and tiering can no longer be viewed merely as a stopgap on the way to the solid state data center. The reality is that we will need to cache and tier in the solid state data center just as we do in the mixed data center today.


Comments
CalistaHerdhart (User Rank: Apprentice)
5/31/2012 | 5:48:17 AM
re: SSD Tiering: Why Caching Won't Die
> Instead, software will need to be intelligent enough to move data to different tiers of SSD both in the storage system and in the server.

George -- end user here...

Based on the offerings out there, automated tiering algorithms don't seem to be very smart, or they come with a bunch of caveats. I still can't find anyone recommending (off the record) automated tiering for production apps without restrictions.

Maybe I am having trouble decoding the vendor marketing speak on that last part, but I can't figure out how tiering is cost-effective versus using flash as a cache...

Would love to hear whether this is really being tackled, and where it would fit versus flash being used as a cache...

Until then, I am steering clear of storage tiering for my next deployment.