Remember about five years or so ago when life was simple? We had fast SCSI and Fibre Channel drives for data, and we had tape for backup. Seemed perfect. Then came the ATA-based drives, and you were told to move your older data to them and start sending backups to disk. Then powering the data center, and storage in particular, became a problem; another use for ATA: put the drives in stand-by mode, spin them down, put them to sleep, and eventually turn them off. As is usually the case, the hardware is ahead of the software, and there's limited automation to leverage all of this. So what's a user to do?

There are so many variations on Tier 3 that it's hard to categorize this entry and catch every permutation. First, what type of data should go on Tier 3? Ideally, everything that isn't currently being accessed (old data), plus copies of current data where there is value in freezing the state of that data for some reason; for example, a database archive or a copy of a PowerPoint presentation that you are going to modify heavily. However, this does NOT include backup data. That data belongs on another disk tier: Tier 4.
Tier 3, then, is essentially data at rest, but data that might need to be accessed in the future, so you want to keep it on a medium that can still deliver that data back to you in short order. The challenge has been to understand how the various manufacturers have responded to this market.
One of the first incarnations, and still one of the most popular today, is manufacturers simply adding shelves of ATA drives to their existing systems, or adding an external box of cheap ATA RAID. Both of these strategies have limited value unless you have a specific need for a scratch area or something of that nature. The exception is storage systems that can auto-migrate old data blocks to this storage on an as-needed basis. If your storage system can't do this for you automatically, either change storage systems or don't use Tier 3 storage in this manner.
Regardless of the capabilities of your Tier 1 or 2 offering, where things get interesting is with systems focused specifically on the data-retention market. They address key requirements (portability, scalability, density, power management, data integrity, and cost efficiency) that the ATA-in-a-shelf solutions lack. By moving (not copying) data, either manually or in an automated fashion, you get that data off primary storage while at the same time giving yourself a data vault.
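To make the "moving, not copying" point concrete, here is a minimal sketch of an automated age-based migration: files that haven't been touched in some time are moved from primary storage to a Tier 3 archive, freeing primary capacity while keeping the data accessible. The mount points, the 180-day threshold, and the function name are all assumptions for illustration; real tiering products implement this at the block or policy level rather than as a script.

```python
import shutil
import time
from pathlib import Path

# Hypothetical mount points -- adjust for your environment.
PRIMARY = Path("/mnt/primary")   # Tier 1/2 storage
ARCHIVE = Path("/mnt/tier3")     # Tier 3 archive (e.g., an NFS share)
AGE_DAYS = 180                   # "old data" threshold (an assumption)

def migrate_old_files(primary: Path, archive: Path, age_days: int) -> list:
    """Move (not copy) files untouched for age_days to the archive,
    preserving the directory layout under the archive root."""
    cutoff = time.time() - age_days * 86400
    moved = []
    for path in primary.rglob("*"):
        if path.is_file() and path.stat().st_atime < cutoff:
            dest = archive / path.relative_to(primary)
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.move(str(path), str(dest))  # move, so primary space is freed
            moved.append(dest)
    return moved
```

Because the archive is just another file system, the moved data remains directly readable without a restore step, which is exactly what distinguishes Tier 3 from backup.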
When I mention a data vault or retention, the first thought is usually compliance or litigation readiness. While these are important, think of the vault from another angle: assets. As the wealth of retained information grows and the ability to index its content improves year by year, data as an asset will be a key strategic initiative in many enterprises. The common requirement in data indexing will be the ability to access that data. What (if anything) you choose to index with today may be different from the application you index with tomorrow. Having that data stored behind a simple, open file-system interface like CIFS or NFS will be critical.
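The value of that open interface is that any indexing application, today's or tomorrow's, can reach the data with ordinary file calls. As a rough illustration (the archive path and the word-level index are assumptions, not any particular product's approach), here is a trivial keyword index built by walking an archive exposed as a plain file system, such as an NFS or CIFS mount:

```python
import re
from collections import defaultdict
from pathlib import Path

def build_keyword_index(archive_root: str) -> dict:
    """Walk an archive mounted as an ordinary file system and build a
    trivial word -> set-of-files index. Nothing here is specific to the
    storage vendor; the interface is just files and directories."""
    index = defaultdict(set)
    for path in Path(archive_root).rglob("*.txt"):
        text = path.read_text(errors="ignore").lower()
        for word in re.findall(r"[a-z]+", text):
            index[word].add(str(path))
    return index
```

Swap in next year's indexing tool and the access method doesn't change; that portability is the point of keeping the vault behind standard protocols.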
Next we will finish up our "Tour of Tiers" with Tier 4.
George Crump is founder of Storage Switzerland, an analyst firm focused on the virtualization and storage marketplaces. It provides strategic consulting and analysis to storage users, suppliers, and integrators. An industry veteran of more than 25 years, Crump has held engineering and sales positions at various IT industry manufacturers and integrators. Prior to Storage Switzerland, he was CTO at one of the nation's largest integrators.