Commentary
George Crump
6/10/2010 08:55 AM

Implementing Storage Capacity Planning In The Modern Era


As discussed in our last entry, all of the storage optimization strategies will affect how much capacity you need to purchase in your next upgrade. The problem is that much of the savings depends on your data. Vendors will tell you something like "your actual mileage will vary," and that is very true. With that as the backdrop, how do you make sure you don't overshoot, or worse, undershoot on your next capacity estimate?

The first step is to understand which of the available optimization technologies your vendor offers. Very few offer all three: thin provisioning, deduplication and compression. Also, some offer the technologies only on certain types of storage (NAS or block). For example, all three of the optimization techniques are becoming common on NAS storage, so if NAS is your primary storage platform you are in great shape. If you need block storage, you may have to investigate further.
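To make that first step concrete, here is a minimal Python sketch using a purely hypothetical vendor feature matrix (the VENDOR_SUPPORT table and the NAS/block split are assumptions for illustration, not any particular vendor's capabilities). It simply records which optimization technologies are offered on which storage type and tells you what you can count on for your primary platform.

```python
# Hypothetical support matrix: technology -> storage types it is offered on.
# These entries are illustrative only; check your own vendor's documentation.
VENDOR_SUPPORT = {
    "thin_provisioning": {"nas", "block"},
    "deduplication":     {"nas"},
    "compression":       {"nas"},
}

def available_optimizations(platform: str) -> list:
    """Return the technologies you can count on for 'nas' or 'block' storage."""
    return [tech for tech, platforms in VENDOR_SUPPORT.items()
            if platform in platforms]

if __name__ == "__main__":
    print(available_optimizations("nas"))    # all three in this example
    print(available_optimizations("block"))  # only thin provisioning here
```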

If your budgeting process allows it, our number one recommendation is to not spend your entire storage budget up front. Set aside as much as 20% of the budget and see what the storage optimization technologies actually deliver. This allows you to be fairly aggressive with your estimates; just make sure that you and your supplier are ready to bring in more storage quickly. As we discussed in the last entry, the amount of efficiency can vary between data types. As a general rule of thumb, we find that 50% optimization is a safe bet if you are storing server virtualization images or have large home directories, again depending on the combination of optimization technologies you choose. For example, we have seen server virtualization environments experience a 90% reduction in capacity needs when thin provisioning, deduplication and compression are all used together. The set-aside for additional storage is critical in case you are too aggressive in your planning; you'll want pre-approval to bring in more capacity. If the optimization works as planned, or even better, you can either spend the money elsewhere or enjoy a bonus (maybe) for not spending all of your IT budget.
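As one way to picture the arithmetic, here is a minimal Python sketch with purely hypothetical numbers (the logical capacity, reduction estimate, cost per TB and budget are all assumptions). It estimates how much raw capacity to buy up front when you expect a given reduction and hold back 20% of the budget as a reserve.

```python
def raw_capacity_needed(logical_tb: float, expected_reduction: float) -> float:
    """Raw TB to provision if optimization delivers `expected_reduction`
    (0.5 means the data shrinks to half its logical size; 0.9 means 90% less)."""
    return logical_tb * (1.0 - expected_reduction)

def upfront_purchase_tb(logical_tb: float,
                        expected_reduction: float,
                        cost_per_tb: float,
                        budget: float,
                        reserve_fraction: float = 0.20) -> float:
    """TB you can buy now while setting aside `reserve_fraction` of the budget
    in case the optimization falls short of the estimate."""
    spend_now = budget * (1.0 - reserve_fraction)
    affordable_tb = spend_now / cost_per_tb
    needed_tb = raw_capacity_needed(logical_tb, expected_reduction)
    return min(affordable_tb, needed_tb)

if __name__ == "__main__":
    # Hypothetical example: 100 TB of logical data, a 50% reduction estimate,
    # $400/TB, and a $30,000 budget with 20% held in reserve.
    print(upfront_purchase_tb(100, 0.50, 400, 30_000))  # -> 50.0 TB
```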

If you cannot set aside some of the budget and you are in a "spend it or lose it" situation, then I have two recommendations. First, if even pessimistic expectations for capacity optimization mean your budget will allow you to purchase more storage than you need, consider buying a portion of that capacity as solid state disk (SSD). This not only consumes budget but, as we discuss in our Visualizing SSD Readiness Guide, also provides accelerated performance for the applications that can justify it. That guide will show you how to determine which applications will benefit most from SSD.

Second, and probably the worst case, is that you think you will need the capacity and you are in a spend-it-or-lose-it situation. My concern here is that the optimization techniques will work better than you think, in which case you could truly end up with shelves of unused capacity. What we suggest is to buy the capacity as planned, but when the product comes in, don't connect it all. You can even have it sitting in the rack; just don't power it on. At least this way you are not paying to power it, and users don't see terabytes of free capacity tempting them to get sloppy in their use of that storage. It is also much harder to deactivate a shelf once it is in use, since most storage systems stripe data vertically across separate shelves. It's better not to enable a shelf until you are sure you are going to need it. This again depends on a system that makes it easy to add capacity with minimal or no downtime.

Regardless of your purchasing flexibility, you do need to factor these new capacity optimization techniques into your capacity plan. The more aggressive you can be, the less capacity you will need. Once you have run the optimization-enabled system for a few weeks and can see what kind of reduction you are actually getting on your data, you can decide whether to spend the remaining budget elsewhere (like SSD) or save it for the future.
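For that review step, here is a similarly hedged Python sketch, again with hypothetical figures: it computes the reduction you are actually observing after a few weeks of production use and how much of the originally planned raw capacity you could redirect, keeping some growth headroom.

```python
def observed_reduction(logical_tb_written: float, physical_tb_used: float) -> float:
    """Fraction of capacity saved, e.g. 0.6 means a 60% reduction."""
    if logical_tb_written <= 0:
        raise ValueError("no data written yet")
    return 1.0 - (physical_tb_used / logical_tb_written)

def capacity_freed(planned_raw_tb: float,
                   logical_tb_written: float,
                   physical_tb_used: float,
                   growth_headroom: float = 0.20) -> float:
    """Raw TB of the original plan you no longer need, keeping 20% headroom."""
    reduction = observed_reduction(logical_tb_written, physical_tb_used)
    still_needed = logical_tb_written * (1.0 - reduction) * (1.0 + growth_headroom)
    return max(planned_raw_tb - still_needed, 0.0)

if __name__ == "__main__":
    # Hypothetical: planned 50 TB raw, wrote 40 TB logical, using 16 TB physical.
    print(observed_reduction(40, 16))   # 0.6 -> a 60% reduction
    print(capacity_freed(50, 40, 16))   # ~30.8 TB of the plan can be redeployed
```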

Track us on Twitter: http://twitter.com/storageswiss

Subscribe to our RSS feed.

George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Find Storage Switzerland's disclosure statement here.
