News
1/12/2010 10:13 AM
George Crump
Commentary

Introduction To Automated Tiering

The concept of multiple tiers of storage has been around for almost as long as storage itself, but the subject drew renewed attention in the early 2000s when Serial Advanced Technology Attachment (SATA) hard drives began to come to market. They offered higher capacity at a lower cost than their Fibre Channel counterparts, but not the same performance. The question that still plagues storage managers is how to get the right data onto them.

The need for data movement between tiers of storage has increased as solid state disk (SSD) based storage systems have become more mainstream. Now there is a high-performance tier in addition to a low-cost tier. Thanks to flash technology, SSDs offer enough capacity at a workable price point along with remarkable performance. Unlike SATA, the technology is not less expensive than Fibre Channel drives, but it does deliver significant performance advantages. Maximizing the SSD investment, however, requires keeping the tier at near 100% capacity utilization while filling it only with the most active data set. If you are paying a premium for high-performance capacity, you want to make sure that capacity is being fully and correctly utilized.
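As a rough illustration of keeping a small SSD tier full of only the hottest data, the sketch below ranks blocks by access count and keeps only as many as the tier can hold. The block IDs, access counts, and tier size are hypothetical; a real array would track this per-extent inside the storage controller.

```python
import heapq

SSD_CAPACITY_BLOCKS = 4  # assumed tier size, in blocks (illustrative)

# Illustrative per-block access counts for the current interval.
access_counts = {
    "blk-01": 120, "blk-02": 3, "blk-03": 850,
    "blk-04": 47,  "blk-05": 910, "blk-06": 15,
    "blk-07": 640, "blk-08": 2,
}

def hottest_blocks(counts, capacity):
    """Return the IDs of the most-accessed blocks that fit on the SSD tier."""
    return set(heapq.nlargest(capacity, counts, key=counts.get))

# The SSD tier stays full, but only with the most active data.
ssd_resident = hottest_blocks(access_counts, SSD_CAPACITY_BLOCKS)
```

The point of the sketch is the selection criterion: the premium tier is always full, but only ever with the currently hottest data, which is exactly the utilization goal described above.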

Even mechanical drives are advancing. The common Fibre Channel interface on drives is being replaced by Serial Attached SCSI (SAS) interfaces. This new interface standard should offer improved performance and availability, and should further lower drive costs when mechanical drives are used for primary storage.

A new tier of storage is a cloud-based archive. As we discuss in our Cloud Archive White Paper, using cloud storage as a final resting place for the data set may be a natural extension of an internal disk-based archive. Cloud storage is moving beyond the discussion phase and into real deployments. Cloud archive storage is the inverse of SSD: access and retrieval are much slower, but its capacity is near limitless. Ideally, the only data you put into the cloud archive is data that you don't expect to need for a long period of time. Understanding your data, or having systems that do, becomes critical.
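A simple age-based policy captures the idea of archiving only data you don't expect to need soon. The sketch below is purely illustrative; the 180-day threshold, file paths, and timestamps are assumptions, not details from any real product.

```python
from datetime import datetime, timedelta

ARCHIVE_AFTER = timedelta(days=180)  # assumed policy threshold (illustrative)
NOW = datetime(2010, 1, 12)          # fixed "current" date for the example

# Hypothetical last-access timestamps, as a file scanner might report them.
last_access = {
    "/projects/q3-report.doc": datetime(2009, 12, 20),
    "/archive/2007-audit.pdf": datetime(2008, 2, 1),
    "/media/training-video.mov": datetime(2009, 3, 15),
}

def archive_candidates(records, now, threshold):
    """Files untouched for longer than the threshold become cloud-archive candidates."""
    return [path for path, ts in records.items() if now - ts > threshold]

candidates = archive_candidates(last_access, NOW, ARCHIVE_AFTER)
```

The recently touched report stays on local storage, while the two long-idle files are flagged for the slower, near-limitless cloud tier.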

The challenge with these distinct tiers, or classes, of storage is how to move data between them. In the past, Information Lifecycle Management (ILM) and Hierarchical Storage Management (HSM) were the proposed solutions. What held back adoption of these technologies, to some extent, was the complexity of implementation. Client- or server-side agents had to be developed and implemented for each of the major platforms or operating systems. Each platform had its own file system intricacies to work through, and most file systems did not know how to handle data that had been moved.

Automated tiering takes the data movement decision away from the server or client and places it closer to the storage. This removes the need to develop multiple agents for multiple file systems and, as a result, should make data movement more seamless. Automated tiering can be implemented in several ways, however: through file virtualization, storage virtualization, smart storage controllers, or a cache-like implementation. The granularity of the movement also varies, from LUN-level to file-level to block-level migration.
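One way to picture a block-level automated tiering pass: each scheduling interval, blocks whose access count crosses a threshold are promoted to the fast tier, and cold blocks resident on the fast tier are demoted. Everything here (the thresholds, tier names, and in-memory maps) is a hypothetical sketch, not any vendor's actual implementation; real systems do this inside the storage controller or virtualization layer.

```python
PROMOTE_THRESHOLD = 100  # accesses per interval (assumed)
DEMOTE_THRESHOLD = 10    # accesses per interval (assumed)

def tiering_pass(tier_of, access_counts):
    """Return {block: new_tier} moves for one scheduling interval."""
    moves = {}
    for blk, count in access_counts.items():
        if tier_of[blk] == "sata" and count >= PROMOTE_THRESHOLD:
            moves[blk] = "ssd"   # hot block on slow tier: promote
        elif tier_of[blk] == "ssd" and count <= DEMOTE_THRESHOLD:
            moves[blk] = "sata"  # cold block on fast tier: demote
    return moves

# Illustrative current placement and interval counts.
tiers = {"blk-a": "sata", "blk-b": "ssd", "blk-c": "sata"}
counts = {"blk-a": 250, "blk-b": 4, "blk-c": 30}
moves = tiering_pass(tiers, counts)
# blk-a is promoted, blk-b is demoted, blk-c stays where it is.
```

Because the decision runs against block access statistics rather than inside a host agent, no per-platform file system integration is needed, which is the seamlessness argument made above.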

The promise of automated tiering is that it will remove one of the most challenging and time-consuming tasks from storage managers: data placement. For many organizations it may be the only practical way to fully leverage all the new tiers of storage. We will take a detailed look at the methods, and the companies that provide them, in a series of upcoming entries.

Track us on Twitter: http://twitter.com/storageswiss

Subscribe to our RSS feed.

George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Find Storage Switzerland's disclosure statement here.
