Commentary | George Crump | 8/11/2008 04:46 PM

Tiered Storage Redefined

In the never-ending world of tiered storage, storage really breaks down into two types: transactional (active) and passive. For obvious reasons these two worlds overlap, but it is surprising how many levels of granularity there are within these tiers. Gone are the days of three tiers. There are more tiers of storage than ever, so it's helpful to see where we are.

What tiers are there? The industry, myself included, has put a number on each tier, and everyone has their own take. Years ago, for me it used to be: tier one for primary storage, tier two for near-line storage, and tier three for tape. Things were simpler then, but that model wasn't going to last. We needed something faster than tier one, something slower and more efficient than tier two, and something more accessible than tier three.

On the high-performance side you have storage controller cache. Yes, cache. It's big enough now that I think it counts as a storage tier. What's good about cache is that you don't have to do anything with it; the system manages it and your job is simple. The bad news is that you don't have to do anything with it, but you may want to. For example, the next tier, RAM-based SSD, can actually be slowed down by cache. It is, after all, RAM, and in a zero-latency system a cache miss causes, well, latency. RAM-based SSDs are ideal for high-write/high-read environments like Oracle undo logs. Then you have flash-based SSDs, ideal for read-heavy environments; they take a bit of a performance hit on writes, although they're still faster than hard disk.

As a result, before we even get to hard disk, we have three flavors of memory, with more to come; somewhere between cache and DRAM SSD you also have to mix in the memory appliances now coming onto the market. So tier 0 is really a range of levels of storage: tier 0, tier 0.5, and tier 0.75.
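To keep the labels straight, here is a minimal sketch of the expanded hierarchy as a data structure. The names and the fractional tier numbers are my own illustrative assumptions, not anyone's official numbering; the memory appliances mentioned above would slot in somewhere between the cache and DRAM SSD entries.

```python
from enum import Enum

class Tier(Enum):
    """Illustrative storage tiers, fastest to slowest.

    The fractional tier numbers follow the loose convention described
    above; they are labels for discussion, not measurements.
    """
    CONTROLLER_CACHE = 0.0   # managed automatically by the array
    RAM_SSD = 0.5            # DRAM-based SSD: strong on both writes and reads
    FLASH_SSD = 0.75         # flash-based SSD: read-optimized, slower on writes
    PRIMARY_DISK = 1         # tier one disk systems
    SECONDARY_DISK = 2       # cost-effective primary storage
    NEARLINE = 3             # SATA archive, possibly spin-down
    TAPE = 4                 # still alive, still cheap per gigabyte
```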

Then we get to tier one disk solutions: clustered, virtualized, monolithic, fast modular systems, and more options than you can count. They all deliver the "fastest system in the industry," which can't all be true, can it? We'll need to take this one apart step by step in future entries.

Tier two, for me, is now primary storage as well, but for workloads where heart-pounding performance isn't needed. It needs to deliver respectable performance, be reliable, and have key features similar to the tier one solutions, like replication and advanced provisioning. But most important, it must be cost effective.

Nearline or archive storage has become very interesting, with many choices. You still have SATA-based systems, SATA-based systems with spin-down drives, and SATA-based systems that can turn drives off entirely. Scale is a critical issue at this tier. Reliability is often overlooked, but it shouldn't be: this is an archive you may need to pull data from 50 years from now, so constant checking of data integrity is going to be increasingly important.
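To make "constant checking of data integrity" concrete, here is a minimal sketch of an archive scrubber. It assumes a flat directory of archive files and a JSON manifest of SHA-256 digests recorded when the data was ingested; both the layout and the manifest format are assumptions for illustration, not any particular product's mechanism.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large archive objects never need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scrub(archive_dir: str, manifest_path: str) -> list[str]:
    """Compare every archived file against the digest recorded at ingest.

    Returns the names of files that are missing or no longer match.
    """
    manifest = json.loads(Path(manifest_path).read_text())  # {"file name": "hex digest"}
    problems = []
    for name, expected in manifest.items():
        path = Path(archive_dir) / name
        if not path.exists() or sha256_of(path) != expected:
            problems.append(name)
    return problems
```

Anything the scrub flags would then be restored from a second copy, which is one reason archives at this tier tend to keep more than one.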

Oh, and tape -- tier 4? Despite popular opinion, tape isn't dead. It is still used, still being improved upon, and still very affordable per gigabyte.

The challenge is how you get data moved between tiers: can you automate it? Does it make sense to? In our next entry, we will begin to talk about how to decide what data to put on what tier and how to get it there.
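As a preview of that question, here is a minimal sketch of one way the movement could be automated: a placement policy driven purely by time since last access. The thresholds and tier labels are invented for illustration; a real policy would also weigh access frequency, the value of the data, and retention requirements.

```python
from datetime import datetime, timedelta
from typing import Optional

# Illustrative thresholds only: maximum age since last access -> target tier.
PLACEMENT_RULES = [
    (timedelta(days=7), "tier 0/1 (SSD or fast disk)"),
    (timedelta(days=90), "tier 2 (cost-effective disk)"),
    (timedelta(days=365), "tier 3 (nearline/archive)"),
]
DEFAULT_TIER = "tier 4 (tape)"

def place(last_accessed: datetime, now: Optional[datetime] = None) -> str:
    """Pick a target tier for a piece of data based only on time since last access."""
    age = (now or datetime.now()) - last_accessed
    for threshold, tier in PLACEMENT_RULES:
        if age <= threshold:
            return tier
    return DEFAULT_TIER
```

For example, place(datetime(2008, 1, 1), now=datetime(2008, 8, 11)) lands data untouched since January on nearline, while data touched in the last week stays on the fast tiers.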

Track us on Twitter: http://twitter.com/storageswiss.

Subscribe to our RSS feed.

George Crump is founder of Storage Switzerland, an analyst firm focused on the virtualization and storage marketplaces. It provides strategic consulting and analysis to storage users, suppliers, and integrators. An industry veteran of more than 25 years, Crump has held engineering and sales positions at various IT industry manufacturers and integrators. Prior to Storage Switzerland, he was CTO at one of the nation's largest integrators.
