News
Commentary
1/4/2010 10:48 AM
George Crump

Four Tiers For The New Decade

The storage component is changing, becoming either dramatically faster with Solid State Disk (SSD) technology or fundamentally more cost effective thanks to capacity-efficient disk archiving or overhead-efficient cloud storage. In addition, all current storage will still need to be managed. A four-tier storage strategy will allow storage managers to develop a storage environment that is both cost efficient and meets increasing performance demands.

Ironically, the last decade kicked off with the arrival of SATA drive technology and the idea of tiered storage and, dare I say it, ILM (Information Lifecycle Management). While those initiatives had merit, part of what doomed them to failure was a lack of need within IT. Now, however, things have changed. The rapid growth of unstructured data (data not in a database) in most organizations is requiring longer and more managed retention. At the same time, database applications, as well as applications with incredibly high user counts thanks to Web 2.0, are causing performance problems. Finally, the unabated rollout of server virtualization is moving operating system data to the SAN or NAS to leverage the flexibility that a virtualized server environment can bring.

As I mentioned earlier, there is also a more dramatic difference between the tiers of storage available now. SSD is exponentially faster but also somewhat more expensive. If the investment is made in SSD, you want to make sure the right data is on that tier for the right amount of time: while it is immediately active. At the other end of the spectrum is archive storage, designed to be cost effective, scalable, capacity optimized, and power efficient. Finally, cloud storage as an archive has a role to play, possibly as the even longer-term or permanent resting ground for data. Somewhere in the middle of SSD and archive are traditional SAS-based mechanical drives that will store near-active data or, as we discuss in our Visual SSD Readiness Guide, data from applications that can't benefit from SSD's speed.
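To make that spectrum concrete, here is a minimal Python sketch of how the four tiers might be modeled. The relative cost figures and use descriptions are illustrative assumptions on my part, not vendor numbers.

from dataclasses import dataclass

@dataclass(frozen=True)
class Tier:
    name: str
    relative_cost: float  # cost per GB, normalized so SAS = 1.0 (illustrative)
    typical_use: str

# Hypothetical characterizations of the four tiers discussed above.
TIERS = [
    Tier("SSD", 10.0, "immediately active data that needs maximum performance"),
    Tier("SAS", 1.0, "near-active data, or applications that can't exploit SSD speed"),
    Tier("Archive", 0.3, "capacity-optimized, power-efficient retention"),
    Tier("Cloud", 0.1, "even longer-term or permanent resting ground for data"),
]

for tier in TIERS:
    print(f"{tier.name}: ~{tier.relative_cost}x the cost of SAS -- {tier.typical_use}")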

With four tiers of storage available to the storage manager, each with its own justifiable differentiation, the missing ingredient is deciding which set of data should go where. Should this be a manual process, or is it something that should be automated? One illustration of what an automated answer could look like appears in the sketch below. Over the next several entries we will examine some of the options available to storage managers and how they might help develop a four-tier storage strategy that balances cost, performance, and reliability.
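The following Python sketch places a file on a tier based solely on its last-access age. The thresholds, and the choice of access age as the only signal, are assumptions for the sake of example; a real automated policy would weigh more factors.

from datetime import datetime, timedelta

# Hypothetical age thresholds -- a real policy would also weigh I/O rate,
# business value, and retention requirements.
SSD_WINDOW = timedelta(days=1)        # immediately active
SAS_WINDOW = timedelta(days=30)       # near-active
ARCHIVE_WINDOW = timedelta(days=365)  # managed retention

def place(last_accessed: datetime) -> str:
    """Return the tier a file should live on, judged by last-access age."""
    age = datetime.now() - last_accessed
    if age <= SSD_WINDOW:
        return "SSD"
    if age <= SAS_WINDOW:
        return "SAS"
    if age <= ARCHIVE_WINDOW:
        return "Archive"
    return "Cloud"

print(place(datetime.now() - timedelta(hours=2)))  # -> SSD
print(place(datetime.now() - timedelta(days=90)))  # -> Archive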

Track us on Twitter: http://twitter.com/storageswiss

Subscribe to our RSS feed.

George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Find Storage Switzerland's disclosure statement here.
