Commentary by George Crump | 1/12/2010 10:13 AM

Introduction To Automated Tiering

The concept of multiple tiers of storage has been around for almost as long as there has been storage, but the subject came up for broader discussion in the early 2000s when Serial Advanced Technology Attachment (SATA) hard drives began to come to market. They offered higher capacity at a lower cost than their Fibre Channel counterparts, but they were not as fast. The question that still plagues storage managers is how to get the right data onto them.

The need for data movement between tiers of storage has increased as solid state disk (SSD) based storage systems have become more mainstream: there is now a high-performance tier in addition to a low-cost tier. Thanks to flash technology, SSDs offer sufficient capacity at a workable price point along with impressive performance. Unlike SATA, the technology is not less expensive than Fibre Channel drives, but it does deliver a significant performance advantage. Maximizing the SSD investment, however, requires keeping those drives at near 100% capacity utilization while ensuring that capacity holds only the most active data set. If you are paying a premium for high-performance capacity, you want to make sure that capacity is being fully and correctly utilized.
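To make that placement problem concrete, here is a minimal Python sketch of the underlying idea: keep a fixed-size SSD tier filled with exactly the hottest blocks. The capacity figure and the block-level bookkeeping are illustrative assumptions for this column, not any vendor's design.

from collections import Counter

SSD_CAPACITY_BLOCKS = 1000   # hypothetical size of the SSD tier, in blocks
access_counts = Counter()    # block id -> recent access count

def record_access(block_id):
    # Called on every I/O so the tiering engine can build a heat map.
    access_counts[block_id] += 1

def choose_ssd_residents():
    # Rank blocks by recent activity and take just enough of the hottest
    # ones to keep the SSD tier near 100% full; everything else stays on
    # the lower-cost SATA tier.
    return {block for block, _ in access_counts.most_common(SSD_CAPACITY_BLOCKS)}

The point of the sketch is the ranking step: the premium tier pays for itself only when it is both full and full of the right data.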

Even mechanical drives are advancing. The common Fibre Channel interface on drives is being replaced by Serial Attached SCSI (SAS) interfaces. This new interface standard should offer improved performance and availability, and should further lower drive costs when mechanical drives are used for primary storage.

A new tier of storage is the cloud-based archive. As we discuss in our Cloud Archive White Paper, using cloud storage as a final resting point for the data set may be a natural extension of an internal disk-based archive. Cloud storage is moving beyond the discussion phase and into real deployments. The cloud archive is the inverse of SSD: access and retrieval are much slower, but its capacity is near limitless. Ideally, the only data you put into the cloud archive is data that you don't think you will need for a long period of time. Understanding your data, or having systems that do, becomes critical.
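As a rough illustration of that "won't need it for a long time" test, the sketch below walks a file tree and flags anything untouched for a year as an archive candidate. The one-year threshold and the choice of last-access time as the trigger are assumptions for the example, not a recommendation from any product.

import os
import time

ARCHIVE_AFTER_SECONDS = 365 * 24 * 3600   # assumed "long period of time"

def candidates_for_cloud_archive(root):
    # Walk the tree and yield files whose last access is older than the
    # threshold; these are the ones worth pushing to the cloud archive.
    now = time.time()
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if now - os.stat(path).st_atime > ARCHIVE_AFTER_SECONDS:
                yield path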

The challenge with these distinct tiers or classes of storage is how to move data between them. In the past, ILM (Information Lifecycle Management) and HSM (Hierarchical Storage Management) were the proposed solutions. To some extent, what held back adoption of these technologies was the complexity of implementation. Client- or server-side agents had to be developed and deployed for each of the major platforms or operating systems. Each platform had its own file system intricacies to work through, and most file systems did not know how to handle data that had been moved off of them.

Automated tiering takes the data movement decision away from the server or client and places it closer to the storage. This removes the need to develop multiple agents to support multiple file systems, and as a result it should make data movement more seamless. Automated tiering, however, can be implemented in several ways: through file virtualization, storage virtualization, smart storage controllers, or a cache-like implementation. The granularity of the movement also varies, from LUN-level to file-level to block-level migration.
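At the finest of those granularities, the heart of such a system is a periodic promote/demote pass over per-block access counts. What follows is a hedged sketch of that loop only; the thresholds and the move_block() data mover are hypothetical stand-ins, not any array's actual firmware.

PROMOTE_THRESHOLD = 100   # accesses per interval that mark a block "hot"
DEMOTE_THRESHOLD = 5      # accesses per interval that mark a block "cold"

def move_block(block, tier):
    # Stand-in for the real data mover, which would copy the block
    # between drive tiers transparently to the attached hosts.
    pass

def tiering_pass(heat, location):
    # heat: block id -> access count this interval
    # location: block id -> "ssd" or "sata"
    for block, count in heat.items():
        if count >= PROMOTE_THRESHOLD and location[block] == "sata":
            move_block(block, "ssd")
            location[block] = "ssd"
        elif count <= DEMOTE_THRESHOLD and location[block] == "ssd":
            move_block(block, "sata")
            location[block] = "sata"
        heat[block] = 0   # reset the counter for the next interval

Because the pass runs at the storage layer rather than on the host, no per-platform agent is needed; the file systems above never see the migration.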

The promise of automated tiering is that it will remove one of the most challenging and time-consuming tasks from storage managers: data placement. For many organizations it may be the only practical way to fully leverage all the new tiers of storage. We will take a detailed look at the methods, and the companies that provide them, in a series of upcoming entries.

Track us on Twitter: http://twitter.com/storageswiss

Subscribe to our RSS feed.

George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Find Storage Switzerland's disclosure statement here.
