Commentary | George Crump | 1/4/2010, 10:48 AM

Four Tiers For The New Decade

The storage component is changing, becoming either dramatically faster with Solid State Disk (SSD) technology or fundamentally more cost-effective thanks to capacity-efficient disk archiving or overhead-efficient cloud storage. In addition, all current storage will still need to be managed. A four-tier storage strategy will allow storage managers to develop a storage environment that is both cost-efficient and able to meet increasing performance demands.

Ironically, the last decade kicked off with the arrival of SATA drive technology and the idea of tiered storage and, dare I say it, ILM (Information Lifecycle Management). While the initiatives had merit, part of what doomed them to failure was a lack of need within IT. Now, however, things have changed. The rapid growth of unstructured data (data not in a database) in most organizations is requiring longer and more managed retention. At the same time, database applications, as well as applications with incredibly high user counts thanks to Web 2.0, are causing performance problems. Finally, the unabated rollout of server virtualization is moving operating system data to the SAN or NAS to leverage the flexibility that a virtualized server environment can bring.

As I mentioned earlier, there is also a more dramatic difference among the tiers of storage available now. SSD is exponentially faster but also somewhat more expensive. If the investment is made in SSD, you want to make sure the right data is on that tier for the right amount of time, which is only while it is immediately active. At the other end of the spectrum is archive storage, designed to be cost-effective, scalable, capacity-optimized, and power-efficient. Finally, cloud storage as an archive has a role to play, possibly as the even longer-term or permanent resting ground for data. Somewhere between SSD and archive storage are traditional SAS-based mechanical drives, which will store near-active data or, as we discuss in our Visual SSD Readiness Guide, data from applications that can't benefit from SSD's speed.

With four tiers of storage available to the storage manager, each with its own justifiable role, the missing ingredient is deciding which set of data should go where. Should this be a manual process, or is it something that should be automated? Over the next several entries we will examine some of the options available to storage managers and how they might help them develop a four-tier storage strategy that balances cost, performance, and reliability.
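As a rough illustration of what an automated placement decision might look like, here is a minimal Python sketch that maps data to one of the four tiers based only on how recently it was accessed. The tier names, the age thresholds, and the place() helper are hypothetical and purely illustrative; a real policy engine would also weigh I/O patterns, retention requirements, and application needs.

```python
from datetime import datetime, timedelta

# Hypothetical tier names and age thresholds -- the column does not
# prescribe specific cutoffs, so these values are purely illustrative.
TIER_POLICY = [
    ("ssd", timedelta(days=7)),             # immediately active data
    ("sas", timedelta(days=90)),            # near-active data
    ("disk_archive", timedelta(days=365)),  # retained, rarely read
    ("cloud_archive", None),                # long-term or permanent resting ground
]

def place(last_access, now=None):
    """Pick a tier based solely on how long ago the data was last accessed."""
    now = now or datetime.utcnow()
    age = now - last_access
    for tier, limit in TIER_POLICY:
        if limit is None or age <= limit:
            return tier

# Example: a file untouched for six months would land on the disk archive tier.
print(place(datetime.utcnow() - timedelta(days=180)))  # -> "disk_archive"
```

The point of the sketch is that the decision can be expressed as a simple, auditable policy; whether that policy runs by hand or inside an automated data mover is the question the next entries will explore.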

Track us on Twitter: http://twitter.com/storageswiss

Subscribe to our RSS feed.

George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Find Storage Switzerland's disclosure statement here.
