News | Commentary
1/12/2010 10:13 AM
George Crump

Introduction To Automated Tiering

The concept of multiple tiers of storage has been around almost as long as storage itself, but the subject drew more attention in the early 2000s, when Serial Advanced Technology Attachment (SATA) hard drives began to come to market. They offered higher capacity at a lower cost than their Fibre Channel counterparts, but they were not as fast. The question that still plagues storage managers is how to get data to them.

The need for data movement between tiers of storage has increased as solid state disk (SSD)-based storage systems have become more mainstream. Now there is a high-performance tier in addition to a low-cost tier. Thanks to flash technology, SSDs offer enough capacity at a lower price point, along with impressive performance. Unlike SATA, the technology is not less expensive than Fibre Channel drives, but it does deliver significant performance advantages. Maximizing the SSD investment, however, requires keeping those drives at near 100% capacity utilization while ensuring that only the most active data set occupies them. If you are paying a premium for high-performance capacity, then you want to make sure that capacity is being fully and correctly utilized.
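To make that idea concrete, here is a minimal sketch, in Python, of the kind of decision an automated tiering engine has to make: pick the most active data set that still fits within a fixed amount of SSD capacity. The Extent structure, the io_count field, and the greedy fill below are hypothetical illustrations for this article, not any vendor's actual implementation.

```python
# A minimal sketch (hypothetical data structures, not any array's API) of
# selecting the most active data set that fits within a fixed SSD capacity,
# so the premium tier stays near 100% utilized by hot data only.

from dataclasses import dataclass

@dataclass
class Extent:
    extent_id: str
    size_gb: float
    io_count: int   # accesses observed over the monitoring window

def pick_hot_set(extents: list[Extent], ssd_capacity_gb: float) -> list[Extent]:
    """Greedy selection: most frequently accessed extents first, until the SSD tier is full."""
    hot_set, used_gb = [], 0.0
    for ext in sorted(extents, key=lambda e: e.io_count, reverse=True):
        if used_gb + ext.size_gb <= ssd_capacity_gb:
            hot_set.append(ext)
            used_gb += ext.size_gb
    return hot_set

# Example: 100 GB of SSD, three candidate extents.
extents = [
    Extent("db-index", 40, 12000),
    Extent("vm-boot", 50, 9000),
    Extent("archive-logs", 80, 150),
]
print([e.extent_id for e in pick_hot_set(extents, 100)])  # ['db-index', 'vm-boot']
```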

Even mechanical drives are advancing. The common Fibre Channel interface on drives is being replaced by Serial Attached SCSI (SAS) interfaces. This new interface standard should offer improved performance and availability and further lower drive costs when using mechanical drives for primary storage.

A new tier of storage is a cloud-based archive. As we discuss in our Cloud Archive White Paper, using cloud storage as a final resting point for the data set may be a natural extension of an internal disk-based archive. Cloud storage is moving beyond the discussion phase and into real deployments. Cloud archive storage is the inverse of SSD: its capacity is nearly limitless, but access and retrieval are much slower. Ideally, the only data you put into the cloud archive is data that you don't expect to need for a long period of time. Understanding your data, or having systems that do, becomes critical.
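As a rough illustration of that kind of policy, the sketch below flags files that have not been accessed in a year as cloud archive candidates. The one-year threshold, the directory walk, and the archive_candidates function are assumptions made for the example; a real archive product would rely on its own metadata and policy engine.

```python
# A minimal sketch (hypothetical policy, not a specific cloud provider's API) of
# an age-based rule for deciding which files are candidates for a cloud archive:
# only data that has gone untouched for a long period is sent off-site.

import os
import time

ARCHIVE_AFTER_DAYS = 365  # assumption: one year without access

def archive_candidates(root: str, min_age_days: int = ARCHIVE_AFTER_DAYS) -> list[str]:
    """Walk a directory tree and return files whose last access is older than the threshold."""
    cutoff = time.time() - min_age_days * 86400
    candidates = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.stat(path).st_atime < cutoff:
                    candidates.append(path)
            except OSError:
                continue  # file vanished or is unreadable; skip it
    return candidates

# Example usage: print everything under /archive untouched for a year.
if __name__ == "__main__":
    for path in archive_candidates("/archive"):
        print(path)
```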

The challenge with these distinct tiers or classes of storage is how to move data between them. In the past, ILM (Information Lifecycle Management) and HSM (Hierarchical Storage Management) were the proposed solutions. To some extent, what held back adoption of these technologies was the complexity of implementation. There were client- or server-side agents that needed to be developed and implemented for each of the major platforms or operating systems. Each platform had its own file system intricacies to work through, and most file systems did not know how to handle data that had been moved off of them.

Automated tiering takes the data movement decision away from the server or client and places it closer to the storage. This removes the need to develop multiple agents to support multiple file systems and, as a result, should make data movement more seamless. Automated tiering, however, can be implemented through several methods: file virtualization, storage virtualization, smart storage controllers, or a cache-like implementation. The granularity of the movement varies among LUN-, file-, and block-level migration.
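Block-level migration is easiest to picture as a periodic promote/demote pass over access statistics. The sketch below is a simplified illustration of that loop; the thresholds, tier names, and retier function are hypothetical, not how any particular controller, virtualization layer, or cache-like implementation actually works.

```python
# A minimal sketch (hypothetical counters and thresholds, not a real controller's
# logic) of block-level automated tiering: blocks whose recent access count
# crosses a "hot" threshold are promoted to SSD, and blocks that have gone cold
# are demoted back to the capacity tier.

HOT_THRESHOLD = 100   # accesses per monitoring interval that mark a block as hot
COLD_THRESHOLD = 5    # accesses per interval below which a block is demoted

def retier(block_tier: dict[int, str], access_counts: dict[int, int]) -> list[tuple[int, str, str]]:
    """Return the list of (block, from_tier, to_tier) moves for this interval."""
    moves = []
    for block, tier in block_tier.items():
        count = access_counts.get(block, 0)
        if tier == "sata" and count >= HOT_THRESHOLD:
            moves.append((block, "sata", "ssd"))
            block_tier[block] = "ssd"
        elif tier == "ssd" and count <= COLD_THRESHOLD:
            moves.append((block, "ssd", "sata"))
            block_tier[block] = "sata"
    return moves

# Example interval: block 7 has turned hot, block 12 has gone cold.
tiers = {7: "sata", 12: "ssd", 20: "sata"}
counts = {7: 250, 12: 2, 20: 40}
print(retier(tiers, counts))  # [(7, 'sata', 'ssd'), (12, 'ssd', 'sata')]
```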

The promise of automated tiering is that it will remove one of the more challenging and time-consuming tasks from storage managers: data placement. For many organizations it may be the only practical way to fully leverage all the new tiers of storage. We will take a detailed look at the methods and the companies that provide them in a series of upcoming entries.

Track us on Twitter: http://twitter.com/storageswiss

Subscribe to our RSS feed.

George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Find Storage Switzerland's disclosure statement here.
