Commentary | 6/10/2010 08:55 AM | George Crump

Implementing Storage Capacity Planning In The Modern Era

As discussed in our last entry, all of the storage optimization strategies will affect how much capacity you need to purchase in your next upgrade. The problem is that much of the savings depends on your data. Vendors will tell you that "your actual mileage will vary," and that is very true. With that as the backdrop, how do you make sure you don't overshoot, or worse, undershoot on your next capacity estimate?

The first step is to understand which of the available optimization technologies your vendor offers. Very few offer all three: thin provisioning, deduplication, and compression. Some offer the technologies only on certain types of storage (NAS or block). For example, all three optimization techniques are becoming common on NAS storage, so if NAS is your primary storage platform you are in great shape. If you need block storage, you may have to investigate further.

If your budgeting process allows it, our number one recommendation is to not spend your entire storage budget upfront. Set aside as much as 20% of the budget and see what the storage optimization technologies actually deliver. This allows you to be fairly aggressive with your estimates; just make sure that you and your supplier are ready to bring in more storage quickly. As we discussed in the last entry, the amount of efficiency can vary between data types. As a general rule of thumb, we find that 50% optimization is a safe bet if you are storing server virtualization images or have large home directories, again depending on the combination of optimization technologies you choose. For example, we have seen server virtualization environments experience a 90% reduction in capacity needs when thin provisioning, deduplication, and compression are all used together. The set-aside for additional storage is critical in case you are too aggressive in your planning; you'll want pre-approval to bring in more capacity. If the optimization works as planned, or even better, you can either spend the money elsewhere or enjoy a bonus (maybe) for not spending all of your IT budget.
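To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. The 100 TB of logical data and the $200,000 budget are hypothetical placeholders; the 50%, 90%, and 20% figures come from the ranges above and should be replaced with estimates for your own environment.

    # Back-of-the-envelope capacity plan: hold back part of the budget and
    # size the initial purchase against a range of optimization estimates.

    def required_raw_tb(logical_tb: float, reduction: float) -> float:
        """Raw capacity needed after optimization (reduction is 0.0-1.0)."""
        return logical_tb * (1.0 - reduction)

    logical_tb = 100.0   # hypothetical logical data over the planning period
    conservative = 0.50  # ~50% reduction: safe bet for VM images / home directories
    aggressive = 0.90    # best case seen with thin provisioning + dedupe + compression
    hold_back = 0.20     # keep ~20% of the budget in reserve

    print(f"Pessimistic raw need: {required_raw_tb(logical_tb, conservative):.0f} TB")
    print(f"Optimistic raw need:  {required_raw_tb(logical_tb, aggressive):.0f} TB")

    budget = 200_000.0   # hypothetical total storage budget ($)
    print(f"Initial purchase: ${budget * (1.0 - hold_back):,.0f} "
          f"(${budget * hold_back:,.0f} held in reserve)")

Sizing against both the pessimistic and optimistic numbers shows how wide the range can be, which is exactly why the reserve matters.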

If you cannot set aside some of the budget and you are in a "spend it or lose it" situation, then I have two recommendations. First, if even with pessimistic expectations for capacity optimization your budget will allow you to purchase more storage than you need, consider buying a portion of that capacity as solid state disk (SSD). This not only consumes the budget but, as we discuss in our Visualizing SSD Readiness Guide, also provides accelerated performance for the applications that can justify it. That guide will show you how to determine which applications will benefit most from SSD.

Second, and probably the worst case, is that you think you will need the capacity and you are in a spend-it-or-lose-it situation. My concern here is that the optimization techniques will work better than you think; in this scenario you could truly end up with shelves of unused capacity. What we suggest here is to buy the capacity as planned, but when the product comes in, don't connect it all. You can even have it sitting in the rack; just don't power it on. At least this way you are not paying to power it, and users don't see terabytes of free capacity, which can cause them to get sloppy in their use of that storage. Also, it is much harder to deactivate a shelf once it is in use, since most storage systems stripe data vertically across separate shelves; it's better not to enable a shelf until you are sure you are going to need it. This again depends on a system to which capacity can be added with minimal or no downtime.

Regardless of your purchasing flexibility, you do need to factor these new capacity optimization techniques into your capacity plan. The more aggressive you can be, the less capacity you will need. Once you have run the optimization-enabled system for a few weeks and can see what kind of reduction you are getting on your data, you can decide whether to spend the remaining budget elsewhere (like SSD) or save it for the future.
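As a rough illustration of that check, the sketch below computes the observed reduction from hypothetical logical-versus-physical usage numbers and projects the remaining raw capacity need; all values are placeholders, not measurements from any real array.

    # After a few weeks of production use, measure the reduction actually
    # delivered, then decide whether the reserved budget is still needed.

    logical_written_tb = 40.0  # data the hosts believe they have written
    physical_used_tb = 14.0    # capacity actually consumed on the array

    reduction = 1.0 - physical_used_tb / logical_written_tb
    print(f"Observed reduction: {reduction:.0%}")  # 65% in this example

    projected_logical_tb = 100.0  # logical demand over the planning period
    print(f"Projected raw need: {projected_logical_tb * (1.0 - reduction):.0f} TB")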

Track us on Twitter: http://twitter.com/storageswiss

Subscribe to our RSS feed.

George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Find Storage Switzerland's disclosure statement here.
