Dark Reading is part of the Informa Tech Division of Informa PLC


News

8/15/2008
05:38 PM
George Crump
Commentary

Oh, Tier 3...


Remember about five years or so ago, when life was simple? We had fast SCSI and Fibre Channel drives for data, and we had tape for backup. It seemed perfect. Then came ATA-based drives, and you were told to move your older data to them and start sending backups to disk. Then powering the data center, and storage in particular, became a problem; another use for ATA: put the drives in standby mode, spin them down, put them to sleep, and eventually turn them off. As is usually the case, the hardware is ahead of the software, and there's limited automation to leverage all of this. So what's a user to do?

There are so many variations on Tier 3 that it's hard to categorize this entry and catch every permutation. First, what type of data should go on Tier 3? Ideally, everything that isn't currently being accessed (old data), plus copies of current data where there is value in freezing its state; for example, a database archive or a copy of a PowerPoint presentation that you are about to modify heavily. This does NOT include backup data, however. Backups belong on another disk tier: Tier 4. Tier 3, then, is essentially data at rest, but data that might need to be accessed in the future, so you want to keep it on a medium that can still deliver it back to you in short order.

The challenge has been understanding how the various manufacturers have responded to this market. One of the first incarnations, and still one of the most popular, is simply adding shelves of ATA drives to an existing system, or adding an external box of cheap ATA RAID. Both of these strategies have limited value unless you have a specific need for a scratch area or something of that nature. The exception is storage systems that can auto-migrate old data blocks to this storage on an as-needed basis.
If your storage system can't do this for you automatically, either change storage systems or don't use Tier 3 storage in this manner. Regardless of the capabilities of your Tier 1 or 2 offering, where things get interesting is with systems that focus specifically on the data-retention market. They address key requirements, like portability, scalability, density, power management, data integrity, and cost efficiency, that the ATA-shelf solutions lack. By moving (not copying) data, either manually or in an automated fashion, you take it off primary storage while at the same time giving yourself a data vault.

When I mention a data vault or retention, the first thought is usually compliance or litigation readiness. While these are important, think of the vault in terms of another kind of value: assets. As the wealth of retained information grows and the ability to index its content improves year by year, data as an asset will be a key strategic initiative in many enterprises. The common requirement in data indexing will be the ability to access that data, and the application you choose to index with today may be different from the one you choose tomorrow. Having that data stored behind a simple, open file-system interface such as CIFS or NFS will be critical.

Next we will finish up our "Tour of Tiers" with Tier 4.
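To make the "moving (not copying)" idea concrete, here is a minimal sketch of age-based migration: scan a primary volume and move any file that hasn't been accessed in 90 days to a Tier 3 mount (for example, an NFS share), preserving the directory layout. The paths, the 90-day threshold, and the function name are illustrative assumptions, not any vendor's implementation; a real tiering tool would also preserve ownership and permissions, handle name collisions, and typically leave a stub or symlink behind on primary storage.

```python
#!/usr/bin/env python3
"""Sketch: migrate cold files from primary (Tier 1/2) storage to a
Tier 3 archive mount. Illustrative only; thresholds and paths are
assumptions, and real tools handle permissions, stubs, and retries."""
import os
import shutil
import time

AGE_THRESHOLD = 90 * 24 * 3600  # seconds: untouched for 90 days = "cold"

def migrate_cold_files(primary_root, tier3_root, now=None):
    """Move (not copy) files whose last access time exceeds the
    threshold, mirroring the directory layout under tier3_root."""
    now = time.time() if now is None else now
    moved = []
    for dirpath, _dirnames, filenames in os.walk(primary_root):
        for name in filenames:
            src = os.path.join(dirpath, name)
            if now - os.stat(src).st_atime < AGE_THRESHOLD:
                continue  # still "hot": leave it on primary storage
            rel = os.path.relpath(src, primary_root)
            dst = os.path.join(tier3_root, rel)
            os.makedirs(os.path.dirname(dst) or tier3_root, exist_ok=True)
            shutil.move(src, dst)  # a move, so primary capacity is reclaimed
            moved.append(rel)
    return moved
```

Note that this relies on access times being tracked; on filesystems mounted with `noatime`, modification time (`st_mtime`) would be the more reliable signal.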

Track us on Twitter: http://twitter.com/storageswiss.

Subscribe to our RSS feed.

George Crump is founder of Storage Switzerland, an analyst firm focused on the virtualization and storage marketplaces. It provides strategic consulting and analysis to storage users, suppliers, and integrators. An industry veteran of more than 25 years, Crump has held engineering and sales positions at various IT industry manufacturers and integrators. Prior to Storage Switzerland, he was CTO at one of the nation's largest integrators.

 
