One of the biggest challenges to expanding a virtual server infrastructure is dealing with the storage challenges that often come with the deployment. The way storage is used in the virtual infrastructure is unlike most use cases. In this environment we want the same storage area to be accessed by almost every connecting server and each of those servers may have dozens of workloads trying to access that storage at the same time.

George Crump, President, Storage Switzerland

August 19, 2010


When deciding what storage is best for server virtualization, many vendors try to narrow the discussion down to a single capability that their product happens to have, and argue that it makes theirs the best storage platform for the virtualized environment. One area of this discussion is often which protocol to use. Your choices are typically iSCSI, Fibre Channel, or NAS, and newer options are emerging; shared SAS and ATA over Ethernet (AoE) are two examples. The choice you make can allow that shared-everything world to be more easily implemented. As we discuss in our recent article "VMware Storage Simplification Strategies," protocol selection is important and, depending on the environment, may be a key consideration, but the protocol is just one of the decision points.

The capabilities of the system itself are critical, and there are two to look for. The first is the ability to adapt and scale to meet the unpredictable workload demands and rapid growth that are typical of virtualized environments. Almost every virtualized server environment we have been involved with sees its virtual server count grow rapidly, often well beyond the initial projections. This impacts storage by requiring more capacity and more I/O bandwidth. An ideal storage selection should be able to handle that growth without forcing you into a premature upgrade or the addition of a second storage platform; either way, life becomes more complicated.
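To see why this headroom matters, a back-of-the-envelope projection can be sketched as below. All of the figures (growth rate, per-VM capacity and IOPS, array limits) are hypothetical examples, not measurements from any real environment:

```python
# Back-of-the-envelope sketch: project compounding VM growth against an
# array's capacity and IOPS ceilings. All figures below are hypothetical.

def quarters_until_exhausted(start_vms, growth_per_quarter,
                             gb_per_vm, iops_per_vm,
                             array_capacity_gb, array_max_iops,
                             max_quarters=40):
    """Return the first quarter in which either the capacity or the IOPS
    limit is exceeded, or None if the array survives the whole window."""
    vms = start_vms
    for q in range(1, max_quarters + 1):
        vms = vms * (1 + growth_per_quarter)
        if (vms * gb_per_vm > array_capacity_gb or
                vms * iops_per_vm > array_max_iops):
            return q
    return None

# Example: 100 VMs growing 15% per quarter, 60 GB and 25 IOPS each,
# against a 20,000 GB / 40,000 IOPS array. Capacity runs out first.
print(quarters_until_exhausted(100, 0.15, 60, 25, 20_000, 40_000))
```

Even with modest-sounding quarterly growth, the compounding effect exhausts a fixed array in a couple of years, which is the premature-upgrade scenario described above.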

The second capability is how simple the storage system itself makes things. You can select a protocol that you are comfortable with, and a system that can handle all of the server growth you can possibly foresee, but if the system does not make storage management easy, it will cause problems in the environment. This goes beyond the data services most people immediately think of, such as snapshots and replication. Your storage system will either have those or not, and if it does not, the hypervisor can often handle them.

The storage system can make life easy for you by handling key storage management challenges, not just data services. For example, how easy is it to add storage capacity? This is critical in a virtual environment, since the likelihood of needing more capacity is high. And this is about more than just how you connect that new storage: how hard is it to make that storage available to existing volumes or to map it into the virtual infrastructure? Another example is how the storage system lets you manage different classes of storage and map those to the appropriate virtual machines. Is there an automated data movement function, or at least a simple way to identify virtual machines that need more or less performance and move them to the corresponding tier?
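The tier-matching decision described above can be sketched in a few lines. The tier names, IOPS thresholds, and VM names here are all invented for illustration; a real system would draw its thresholds from the arrays actually deployed:

```python
# Hypothetical sketch of the tiering decision: given observed average
# IOPS per virtual machine, suggest which storage tier each VM belongs
# on. Tier names and thresholds below are made-up examples.

TIERS = [
    ("ssd",     2000),   # high-performance tier: >= 2000 IOPS
    ("fc-disk",  500),   # mid tier:              >= 500 IOPS
    ("sata",       0),   # capacity tier: everything else
]

def suggest_tier(avg_iops):
    """Return the first tier whose threshold the VM's IOPS meets."""
    for tier, threshold in TIERS:
        if avg_iops >= threshold:
            return tier
    return TIERS[-1][0]

def moves_needed(vm_stats, current_tiers):
    """Return {vm: (current_tier, suggested_tier)} for misplaced VMs."""
    return {vm: (current_tiers[vm], suggest_tier(iops))
            for vm, iops in vm_stats.items()
            if current_tiers[vm] != suggest_tier(iops)}

stats  = {"web01": 120, "db01": 3500, "app01": 800}
placed = {"web01": "fc-disk", "db01": "fc-disk", "app01": "fc-disk"}
print(moves_needed(stats, placed))
# web01 is over-provisioned, db01 under-provisioned, app01 is fine
```

Whether this kind of logic runs automatically inside the array or is something an administrator does by hand is exactly the simplicity question the article raises.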

Selecting the right storage strategy for virtual environments is more than picking the right protocol. The capabilities of the storage system itself play an important role in deciding if you will be buried in storage management tasks or be free to focus on other responsibilities.


George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Find Storage Switzerland's disclosure statement here.

About the Author(s)

George Crump

President, Storage Switzerland

George Crump is president and founder of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. With 25 years of experience designing storage solutions for datacenters across the US, he has seen the birth of such technologies as RAID, NAS, and SAN. Prior to founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration, and product selection. George is responsible for the storage blog on InformationWeek's website and is a regular contributor to publications such as Byte and Switch, SearchStorage, eWeek, SearchServerVirtualization, and SearchDataBackup.
