August 11, 2008
3 Min Read
In the never-ending world of tiered storage, it really breaks down into two types of storage: transactional (active) and passive. For obvious reasons these two worlds overlap, but it is surprising how many levels of granularity there are within these tiers. Gone are the days of three tiers. There are more tiers of storage than ever, so it's helpful to see where we are.

What tiers are there? The industry, myself included, has put a number on each tier, and everyone has their own take. Years ago, for me it was tier one for primary storage, tier two for near-line storage, and tier three for tape. Things were simpler then, but it wasn't going to work. We needed something faster than tier one, something slower and more efficient than tier two, and something more accessible than tier three.
On the high-performance side you have storage controller cache. Yes, cache. It's big enough now that I think it counts as a storage tier. What's good about cache is that you don't have to do anything with it: the system manages it, and your job is simple. The bad news is that you can't do anything with it, even when you might want to. For example, the next tier, RAM-based SSD, can actually be slowed down by cache. It is, after all, RAM, and in a zero-latency system a cache miss causes, well, latency. RAM-based SSDs are ideal for high-write/high-read environments like Oracle undo logs. Then you have flash-based SSD, ideal for read-heavy environments; flash takes a bit of a performance hit on writes, although it's still faster than hard disk.
As a result, before we even get to hard disk, we have three flavors of memory, with more to come. Somewhere between cache and RAM-based SSD, you also have to mix in the memory appliances now coming onto the market. So tier 0 is really several levels of storage: tier 0, tier 0.5, and tier 0.75.
Then we get to tier one disk solutions: clustered, virtualized, monolithic, fast modular systems, and more options than you can count. All claim to deliver the "fastest system in the industry," but they can't all be right, can they? We'll take this one apart step by step in future entries.
Tier two for me is now primary storage as well, but for workloads where heart-pounding performance isn't needed. It needs to deliver respectable performance, be reliable, and offer key features similar to those of tier one solutions, like replication and advanced provisioning. Most important, it must be cost-effective.
Nearline or archive storage has become very interesting, with many choices: SATA-based systems, SATA-based systems with spin-down drives, and SATA-based systems that can turn drives off entirely. Scale is a critical issue at this tier. Reliability is often overlooked, but it shouldn't be. This archive is something you may need to pull data from 50 years from now, so constant checking of data integrity is going to be increasingly important.
Oh, and tape -- tier 4? Despite popular opinion, tape isn't dead. It is still used, still being improved upon, and still very affordable per gigabyte.
The challenge is how you get this data moved between tiers. Can you automate it? Does it make sense to? In our next entry, we will begin to talk about how to decide what data to put on what tier and how to get it there.
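To make the tiering decision concrete, here is a minimal sketch of an automated, age-based placement policy. The tier labels follow the breakdown above, but the thresholds and function names are purely illustrative assumptions on my part, not anything from a specific product:

```python
# Illustrative only: real tiering products weigh access frequency,
# capacity cost, and SLAs, not just the age of the last access.
TIERS = [
    (1,   "tier 0 (cache/SSD)"),      # touched within the last day
    (30,  "tier 1 (fast disk)"),      # touched within the last month
    (365, "tier 2 (capacity disk)"),  # touched within the last year
]
ARCHIVE = "tier 3/4 (archive or tape)"

def assign_tier(days_since_last_access: int) -> str:
    """Return a tier label based on how recently the data was accessed."""
    for max_age_days, label in TIERS:
        if days_since_last_access <= max_age_days:
            return label
    return ARCHIVE

print(assign_tier(0))     # -> tier 0 (cache/SSD)
print(assign_tier(90))    # -> tier 2 (capacity disk)
print(assign_tier(2000))  # -> tier 3/4 (archive or tape)
```

Even a toy policy like this shows why automation matters: the rules are trivial per file, but applying them continuously across millions of objects is not something an administrator can do by hand.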
Track us on Twitter: http://twitter.com/storageswiss.
Subscribe to our RSS feed.
George Crump is founder of Storage Switzerland, an analyst firm focused on the virtualization and storage marketplaces. It provides strategic consulting and analysis to storage users, suppliers, and integrators. An industry veteran of more than 25 years, Crump has held engineering and sales positions at various IT industry manufacturers and integrators. Prior to Storage Switzerland, he was CTO at one of the nation's largest integrators.
About the Author(s)
President, Storage Switzerland
George Crump is president and founder of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. With 25 years of experience designing storage solutions for datacenters across the US, he has seen the birth of such technologies as RAID, NAS, and SAN. Prior to founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration, and product selection. George is responsible for the storage blog on InformationWeek's website and is a regular contributor to publications such as Byte and Switch, SearchStorage, eWeek, SearchServerVirtualization, and SearchDataBackup.