Commentary by George Crump | 8/11/2008 04:46 PM

Tiered Storage Redefined

In the never-ending world of tiered storage, it really breaks down into two types: transactional (active) storage and passive storage. For obvious reasons these two worlds overlap, but it is surprising how many levels of granularity there are within them. Gone are the days of three tiers. There are more tiers of storage than ever, so it's helpful to take stock of where we are.

What tiers are there? The industry, myself included, has put a number on each tier, and everyone has their own take. Years ago, for me it was tier one for primary storage, tier two for near-line storage, and tier three for tape. Things were simpler then, but that model wasn't going to hold. We needed something faster than tier one, something slower and more efficient than tier two, and something more accessible than tier three.

On the high-performance side you have storage controller cache. Yes, cache. It's big enough now that I think it counts as a storage tier in its own right. What's good about cache is that you don't have to do anything with it; the system manages it and your job is simple. The bad news is also that you don't have to do anything with it, even when you may want to. For example, the next tier, RAM-based SSD, can actually be slowed down by cache. It is, after all, RAM, and in a zero-latency system a cache miss causes, well, latency. RAM-based SSDs are ideal for high-write/high-read environments such as Oracle undo logs. Then you have flash-based SSD, which is ideal for read-heavy environments but takes a bit of a performance hit on writes, although it's still faster than hard disk.
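To make the cache-miss point concrete, here is a minimal sketch (in Python) of the effective-latency arithmetic. The microsecond figures are assumed placeholders for illustration, not measurements from any particular system; the point is simply that when the backing tier is already near-zero latency, a low hit rate in the cache in front of it can add time rather than save it.

```python
# Illustrative only: rough effective-latency arithmetic for a cached tier.
# The latency figures below are assumed placeholders, not vendor numbers.

def effective_latency(hit_rate, hit_latency_us, miss_latency_us):
    """Average access latency given a cache hit rate and per-path latencies."""
    return hit_rate * hit_latency_us + (1.0 - hit_rate) * miss_latency_us

# Hypothetical example: controller cache sitting in front of a RAM-based SSD.
cache_hit_us = 10.0      # assumed cache hit service time
ram_ssd_us = 15.0        # assumed direct RAM-SSD access time
miss_penalty_us = 25.0   # assumed miss path: cache lookup plus RAM-SSD access

for hit_rate in (0.95, 0.50, 0.10):
    avg = effective_latency(hit_rate, cache_hit_us, miss_penalty_us)
    print(f"hit rate {hit_rate:.0%}: ~{avg:.1f} us vs {ram_ssd_us:.1f} us direct")
```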

As a result, before we even get to hard disk, we have three flavors of memory-based storage, with more to come: somewhere between controller cache and DRAM SSD, you have to mix in the memory appliances now coming onto the market. So tier 0 is really several levels of storage in itself -- tier 0, tier 0.5, and tier 0.75.

Then we get to tier one disk solutions: clustered, virtualized, monolithic, fast modular systems, and more options than you can count. They all deliver the "fastest systems in the industry," which can't all be true, can it? We'll take this one apart step by step in future entries.

Tier two, for me, is now also primary storage, but for workloads where heart-pounding performance isn't needed. It needs to deliver respectable performance, be reliable, and offer key features similar to the tier one solutions, such as replication and advanced provisioning. Most important, it must be cost-effective.

Nearline or archive storage has become very interesting, with many choices. You still have SATA-based systems, SATA-based systems with spin-down drives, and SATA-based systems that can power drives off entirely. Scale is a critical issue at this tier. Reliability is often overlooked, but it shouldn't be: this is an archive you may need to pull data from 50 years from now, so constant checking of data integrity is going to be increasingly important.
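As a rough illustration of what that ongoing integrity checking might look like, here is a minimal scrubbing sketch. It assumes a JSON manifest of SHA-256 checksums recorded when files first landed on the archive tier; the manifest name and layout are hypothetical, not tied to any specific product.

```python
# A minimal integrity-scrub sketch, assuming a JSON manifest of SHA-256
# checksums recorded when files were first written to the archive tier.
# Paths and manifest layout here are illustrative, not a specific product.
import hashlib
import json
from pathlib import Path

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 so large archive objects fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scrub(manifest_path):
    """Re-hash every file in the manifest and report silent corruption."""
    manifest = json.loads(Path(manifest_path).read_text())
    for rel_path, recorded in manifest.items():
        current = sha256_of(rel_path)
        status = "OK" if current == recorded else "MISMATCH"
        print(f"{status}  {rel_path}")

# scrub("archive_manifest.json")  # hypothetical manifest filename
```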

Oh, and tape -- tier 4? Despite popular opinion, tape isn't dead. It is still used, still being improved upon, and still very affordable per gigabyte.

The challenge is how you get data moved between these tiers: can you automate it? Does it make sense to? In our next entry, we will begin to talk about how to decide what data to put on which tier and how to get it there.
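As a teaser for that discussion, here is one very simple way automation could work: an age-based placement rule that maps files to tiers by how recently they were accessed. The thresholds and tier labels below are illustrative assumptions, not a recommendation.

```python
# A minimal sketch of an age-based placement rule, assuming last-access time
# is a reasonable proxy for "active" vs. "passive" data. The thresholds and
# tier names are illustrative assumptions.
import os
import time

def suggest_tier(path, now=None):
    """Map a file to a coarse tier based on days since last access."""
    now = now or time.time()
    age_days = (now - os.stat(path).st_atime) / 86400.0
    if age_days < 7:
        return "tier 1 (primary disk)"
    if age_days < 90:
        return "tier 2 (cost-effective primary)"
    if age_days < 365:
        return "tier 3 (nearline/archive)"
    return "tier 4 (tape)"

# Example: print a suggestion for every file under a hypothetical directory.
# for root, _dirs, files in os.walk("/data/projects"):
#     for name in files:
#         full = os.path.join(root, name)
#         print(suggest_tier(full), full)
```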

Track us on Twitter: http://twitter.com/storageswiss.

Subscribe to our RSS feed.

George Crump is founder of Storage Switzerland, an analyst firm focused on the virtualization and storage marketplaces. It provides strategic consulting and analysis to storage users, suppliers, and integrators. An industry veteran of more than 25 years, Crump has held engineering and sales positions at various IT industry manufacturers and integrators. Prior to Storage Switzerland, he was CTO at one of the nation's largest integrators.
