
News

1/12/2010
10:13 AM
George Crump
Commentary

Introduction To Automated Tiering

The concept of multiple tiers of storage has been around for almost as long as there has been storage, but the subject drew renewed attention in the early 2000s, when Serial Advanced Technology Attachment (SATA) hard drives began to come to market. They were higher capacity and less expensive than their Fibre Channel counterparts, but not as fast. The question that still plagues storage managers is how to get data to them.

The need for data movement between tiers of storage has increased as Solid State Disk (SSD) based storage systems have become more mainstream. Now there is a high-performance tier in addition to a low-cost tier. Thanks to flash technology, SSDs offer enough capacity at a lower price point than in the past, along with impressive performance. Unlike SATA, the technology is not less expensive than Fibre Channel drives, but it does deliver significant performance advantages. Maximizing the SSD investment, however, requires keeping those drives at near 100% capacity utilization while filling them with only the most active data set. If you are paying a premium for high-performance capacity, then you want to make sure that capacity is being fully and correctly utilized.
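As a rough illustration of that placement problem, here is a minimal sketch that greedily fills a fixed amount of SSD capacity with the hottest data first. The extent names, sizes, access counts, and capacity figure are hypothetical; a real array or tiering product would use its own telemetry and placement logic.

```python
# Hypothetical sketch: choosing which extents belong on the SSD tier.
# Assumes per-extent access counts are already available (e.g. from array
# telemetry). Extent, pick_ssd_residents, and the sample data are illustrative.
from dataclasses import dataclass

@dataclass
class Extent:
    extent_id: str
    size_gb: float
    accesses_last_24h: int   # "heat" of this extent

def pick_ssd_residents(extents, ssd_capacity_gb):
    """Greedily fill the SSD tier with the hottest extents until it is nearly full."""
    placed, used = [], 0.0
    for ext in sorted(extents, key=lambda e: e.accesses_last_24h, reverse=True):
        if used + ext.size_gb <= ssd_capacity_gb:
            placed.append(ext)
            used += ext.size_gb
    return placed, used / ssd_capacity_gb  # residents and utilization ratio

extents = [Extent("db-log", 50, 90_000), Extent("vm-img", 400, 1_200),
           Extent("mail-idx", 120, 45_000), Extent("archive", 900, 30)]
residents, utilization = pick_ssd_residents(extents, ssd_capacity_gb=200)
print([e.extent_id for e in residents], f"{utilization:.0%} of SSD used")
```

The point of the sketch is only the ordering: the most active data fills the expensive tier first, so the premium capacity stays both full and correctly used.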

Even mechanical drives are advancing. The common Fibre Channel interface on drives is being replaced by Serial Attached SCSI (SAS) interfaces. This newer interface standard should offer improved performance and availability, and further lower drive costs when using mechanical drives for primary storage.

A new tier of storage is a cloud-storage-based archive. As we discuss in our Cloud Archive White Paper, using cloud storage as a final resting point for the data set may be a natural extension to an internal disk-based archive. Cloud storage is moving beyond the discussion phase and into real deployments. Cloud archive storage is the inverse of SSD: its capacity is nearly limitless, but access and retrieval are much slower. Ideally, the only data you put into the cloud archive is data that you don't think you will need for a long period of time. Understanding your data, or having systems that do, becomes critical.
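A simple age-based selection policy captures the idea. The sketch below walks a directory tree and flags files that have not been accessed in a year as archive candidates; the one-year threshold and the root path are assumptions for illustration, not a recommendation, and real archive software would track access history far more carefully.

```python
# Hypothetical sketch: selecting candidates for a cloud archive tier.
# Only files untouched for a long window are candidates, since retrieval is slow.
import os
import time

ARCHIVE_AFTER_DAYS = 365  # illustrative threshold

def archive_candidates(root):
    """Yield paths of files not accessed in ARCHIVE_AFTER_DAYS days."""
    cutoff = time.time() - ARCHIVE_AFTER_DAYS * 86400
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                if os.stat(path).st_atime < cutoff:   # last access time
                    yield path
            except OSError:
                continue  # file vanished or is unreadable; skip it

for path in archive_candidates("/data/projects"):  # hypothetical root
    print("archive ->", path)
```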

The challenge with these distinct tiers, or classes, of storage is how to move data between them. In the past, ILM (Information Lifecycle Management) and HSM (Hierarchical Storage Management) were the proposed solutions. To some extent, what held back adoption of these technologies was the complexity of implementation. There were client- or server-side agents that needed to be developed and implemented for each of the major platforms or operating systems. Each platform had its own file system intricacies to work through, and most file systems did not know how to handle data that had been moved off of them.

Automated tiering takes the data movement decision away from the server or client and places it closer to the storage. This removes the need to develop multiple agents to support multiple file systems and, as a result, should make data movement more seamless. Automated tiering can be implemented in several ways, however: through file virtualization, storage virtualization, smart storage controllers, or a cache-like implementation. The granularity of the movement also varies, with migration happening at the LUN, file, or block level.
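To make the controller-side decision concrete, here is a minimal sketch of a block-level tiering pass, assuming the array keeps per-block I/O counters. The tier names, thresholds, and data structures are hypothetical and stand in for whatever a given implementation actually uses.

```python
# Hypothetical sketch of a controller-side tiering pass: the decision is made
# from block-level I/O counters the array keeps, with no host-side agents.
PROMOTE_IOPS = 500    # busier than this -> move up to SSD (illustrative)
DEMOTE_IOPS = 10      # quieter than this -> move down to SATA (illustrative)

def plan_moves(block_stats):
    """block_stats: {block_id: (current_tier, avg_iops)} -> list of planned moves."""
    moves = []
    for block_id, (tier, iops) in block_stats.items():
        if tier != "ssd" and iops >= PROMOTE_IOPS:
            moves.append((block_id, tier, "ssd"))
        elif tier != "sata" and iops <= DEMOTE_IOPS:
            moves.append((block_id, tier, "sata"))
    return moves

stats = {"blk-001": ("sas", 2_300), "blk-002": ("ssd", 4), "blk-003": ("sas", 40)}
for block_id, src, dst in plan_moves(stats):
    print(f"migrate {block_id}: {src} -> {dst}")
```

Because the logic lives with the storage rather than on each host, the same pass works regardless of which operating systems or file systems sit above it.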

The promise of automated tiering is that it will remove one of the most challenging and time-consuming tasks from storage managers: data placement. For many organizations it may be the only practical way to fully leverage all the new tiers of storage. We will take a detailed look at the methods, and the companies that provide them, in a series of upcoming entries.

Track us on Twitter: http://twitter.com/storageswiss

Subscribe to our RSS feed.

George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Find Storage Switzerland's disclosure statement here.