News
Commentary
8/15/2008 05:38 PM
George Crump

Oh, Tier 3...

Remember about five years or so ago when life was simple? We had fast SCSI and Fibre Channel drives for data and we had tape for backup. Seemed perfect. Then came the ATA-based drives, and you were told to move your older data to them and start sending backups to disk. Then powering the data center, and storage in particular, became a problem, which suggested another use for ATA: put the drives in standby mode, spin them down, put them to sleep, and eventually turn them off. As is usually the case, the hardware is ahead of the software and there's limited automation to leverage all of this, so what's a user to do?

There are so many variations on Tier 3 that it's hard to categorize this entry and catch every permutation. First, what type of data should go on Tier 3? Ideally, everything that isn't currently being accessed (old data), plus copies of current data where there is value in freezing the state of that data; for example, a database archive or a copy of a PowerPoint presentation that you are going to modify heavily. However, this does NOT include backup data. That data needs to go on another disk tier: Tier 4. Tier 3, then, is essentially data at rest, but data that might need to be accessed in the future, so you want to keep it on a medium that can still deliver that data back to you in short order.

The challenge has been to understand how the various manufacturers have responded to this market. One of the first incarnations, and still one of the most popular today, is simply adding shelves of ATA drives to existing systems, or bolting on an external box of cheap ATA RAID. Both of these strategies have limited value unless you have a specific need for a scratch area or something of that nature. The exception is storage systems that can auto-migrate old data blocks to this storage on an as-needed basis.
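In practice, the policy behind that kind of auto-migration often comes down to an access-age threshold. As a rough illustration only, and not any vendor's actual implementation, here is a minimal Python sketch of such a policy; the `days_idle` threshold, the function name, and the reliance on last-access time are all assumptions for the sake of the example.

```python
import os
import time


def tier3_candidates(root, days_idle=180, now=None):
    """Return paths under `root` not accessed in `days_idle` days.

    A hypothetical age-based policy: files whose last-access time is
    older than the cutoff are candidates for migration to a Tier 3
    (data-at-rest) archive.
    """
    now = now if now is not None else time.time()
    cutoff = now - days_idle * 86400
    candidates = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.stat(path).st_atime < cutoff:
                    candidates.append(path)
            except OSError:
                continue  # file vanished or unreadable; skip it
    return candidates
```

Real tiering engines work at the block level rather than on whole files, but the idea is the same: age out what nobody is touching.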
If your storage system can't do this for you automatically, either change storage systems or don't use Tier 3 storage in this manner. Regardless of the capabilities of your Tier 1 or Tier 2 offering, where things get interesting is with systems that focus specifically on the data-retention market. They address key requirements like portability, scalability, density, power management, data integrity, and cost efficiency that the ATA-shelf solutions lack. By moving (not copying) data, either manually or in an automated fashion, you free up primary storage while at the same time giving yourself a data vault.

When I mention a data vault or retention, the first thought is usually compliance or litigation readiness. While these are important, think of the vault in terms of another kind of value: assets. As the wealth of retained information grows and the ability to index its content improves year by year, data as an asset will be a key strategic initiative in many enterprises. The common requirement in data indexing will be the ability to access that data. The application you choose to index with today may be different from the one you use tomorrow, so having that data stored behind a simple, open file-system interface like CIFS or NFS will be critical.

Next we will finish up our "Tour of Tiers" with Tier 4.
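The move-not-copy step described above can be sketched in a few lines. This is a hypothetical illustration, not any product's mechanism: the file is relocated to an archive path (standing in for a Tier 3 mount such as an NFS share), and a symlink is left behind so the original path still resolves; the function name and stub-via-symlink approach are assumptions for the example.

```python
import os
import shutil


def migrate_to_tier3(path, archive_root):
    """Move a file to the archive tier and leave a symlink behind.

    Moving rather than copying reclaims space on primary storage,
    while the symlink keeps the old path usable by applications.
    """
    dest = os.path.join(archive_root, os.path.basename(path))
    shutil.move(path, dest)  # move, not copy: primary space is freed
    os.symlink(dest, path)   # stub so the original path still works
    return dest
```

Commercial HSM products use richer stubs (reparse points, sparse placeholder files) for the same reason: the application never needs to know the data moved.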

Track us on Twitter: http://twitter.com/storageswiss.


George Crump is founder of Storage Switzerland, an analyst firm focused on the virtualization and storage marketplaces. It provides strategic consulting and analysis to storage users, suppliers, and integrators. An industry veteran of more than 25 years, Crump has held engineering and sales positions at various IT industry manufacturers and integrators. Prior to Storage Switzerland, he was CTO at one of the nation's largest integrators.
