The storage infrastructure that supports a virtualized server environment can quickly become a roadblock to expansion. As the project grows, server virtualization places new performance and scaling demands on storage that many IT professionals have not had to deal with in the past. In this entry we will cover some of the causes of those problems, and in upcoming entries we will discuss how to overcome them.

George Crump, President, Storage Switzerland

March 15, 2011

3 Min Read

The first problem that server virtualization causes for the storage environment is typically an increase in I/O demand per physical server connecting to that storage. Before virtualization, most servers ran a single application, and there were often plenty of processing and storage I/O resources to go around. With server virtualization we are stacking many applications onto a single host, each running in its own virtual server, and multiplying the potential storage I/O to 10X or more of what it used to be.
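To make that multiplication concrete, here is a minimal back-of-the-envelope sketch; the per-server IOPS figures are hypothetical illustrations, not measurements from any real environment:

```python
# Back-of-the-envelope sketch: aggregate storage I/O after consolidation.
# The per-server IOPS figures below are hypothetical, for illustration only.

physical_servers = [
    {"name": "mail",  "avg_iops": 150},
    {"name": "web-1", "avg_iops": 80},
    {"name": "web-2", "avg_iops": 80},
    {"name": "db",    "avg_iops": 400},
    {"name": "file",  "avg_iops": 120},
]

# Before virtualization, each server's storage saw only its own load.
# After consolidation, the single host (and its datastore) sees the sum.
aggregate_iops = sum(s["avg_iops"] for s in physical_servers)
print(f"Aggregate IOPS hitting the shared datastore: {aggregate_iops}")

# Peaks rarely line up perfectly, but sizing for the sum is the safe ceiling.
```

Each of those workloads used to have a storage path sized for its own load; after consolidation, the shared datastore has to absorb all of them at once.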

The second problem is that this I/O is now heavily randomized. All of these applications act on their own behalf with little or no knowledge of the other applications running in the virtual machines on the physical host they share. Seldom will one application check to see whether another is busy with storage traffic. It becomes the job of the hypervisor to broker the available bandwidth, which for the most part it does on a round-robin basis (a simple sketch of this idea follows below). Fine-tuning this brokering is something we will dive into in an upcoming entry.

Finally, troubleshooting a performance problem was somewhat simpler in the physical world because the issue was isolated to a single server and its path to storage. Often that server was the only one accessing that area of storage; we didn't want other physical servers touching the same storage area. With server virtualization, storage I/O resources are shared not only across multiple virtual machines on a single physical host, but also across multiple physical hosts accessing the same storage areas so that features like virtual machine migration can take place.
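As a rough illustration of that round-robin brokering, the sketch below cycles through per-VM request queues. The VM names and requests are hypothetical, and a real hypervisor scheduler also accounts for shares and queue depth:

```python
from collections import deque

# Minimal sketch of round-robin I/O brokering, loosely modeled on how a
# hypervisor shares a host's storage queue. VM names and request lists
# are hypothetical.

vm_queues = {
    "vm-app": deque(["read blk 17", "read blk 904", "write blk 2"]),
    "vm-db":  deque(["write blk 55", "read blk 3120"]),
    "vm-web": deque(["read blk 8"]),
}

def round_robin(queues):
    """Yield one pending request from each VM in turn until all are drained."""
    while any(queues.values()):
        for vm, q in queues.items():
            if q:
                yield vm, q.popleft()

for vm, request in round_robin(vm_queues):
    print(f"{vm}: {request}")
```

Even if each VM issues its requests sequentially, the interleaved stream that reaches the array looks random, which is exactly the effect described above.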

Beyond performance, there are also new demands placed on scalability. Scaling in this context means not only actual storage capacity but also I/O capacity. While various cloning and deduplication technologies improve storage capacity efficiency in virtualized server environments, rapid VM adoption and poor template control can still cause capacity issues. The bigger storage challenge may be adding that capacity without interrupting service. In the shared-everything world of server virtualization, any downtime, such as for a capacity upgrade, has an exponential impact.
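As a hypothetical illustration of how capacity can still get away from you even with cloning in place, consider this simple comparison (all sizes are assumed for the example):

```python
# Hypothetical capacity sketch: why VM sprawl eats space even with cloning.
template_gb = 40            # size of the golden template (assumed)
unique_data_per_vm_gb = 8   # data each clone writes on top of the template (assumed)
vm_count = 120

full_copies = vm_count * template_gb
linked_clones = template_gb + vm_count * unique_data_per_vm_gb

print(f"Full copies:   {full_copies} GB")
print(f"Linked clones: {linked_clones} GB")
# Cloning helps enormously, but 120 VMs still add roughly 1 TB of unique data
# here, and every added VM also adds I/O that the same storage must absorb.
```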

The other aspect of capacity is understanding the I/O capacity available when you want to virtualize another server, whether a brand-new server or a legacy physical one. In either case you have to know the best place to put it. Understanding which physical host has the most available CPU and storage I/O resources becomes critical to confidently placing that next virtual machine.
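A minimal sketch of that placement decision might look like the following; the host statistics and the new VM's requirements are hypothetical, and in practice these numbers would come from the hypervisor's performance counters:

```python
# Minimal placement sketch: pick the host with the most CPU *and* storage I/O
# headroom for the next VM. All figures below are hypothetical.

hosts = [
    {"name": "host-01", "cpu_free_pct": 35, "iops_free": 1200},
    {"name": "host-02", "cpu_free_pct": 60, "iops_free": 300},
    {"name": "host-03", "cpu_free_pct": 45, "iops_free": 2500},
]

new_vm = {"cpu_pct": 20, "iops": 500}

def can_host(host, vm):
    """A host qualifies only if it has both CPU and storage I/O headroom."""
    return host["cpu_free_pct"] >= vm["cpu_pct"] and host["iops_free"] >= vm["iops"]

candidates = [h for h in hosts if can_host(h, new_vm)]
# Prefer the host with the most storage I/O headroom left after placement.
best = max(candidates, key=lambda h: h["iops_free"] - new_vm["iops"])
print(f"Place the new VM on {best['name']}")
```

Note that the host with the most free CPU is not necessarily the right answer; in this example it is the one with the least storage I/O headroom.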

As you look to increase virtual machine density or to virtualize more mission-critical applications, the limitations of storage performance, and the complexity of working around those limitations, can cause virtualization projects to stall. There are two basic options to solve this problem: you can make the whole environment go faster, or you can fine-tune it. In an upcoming webinar and in our next entry we will cover the "go faster" option.

Track us on Twitter: http://twitter.com/storageswiss

Subscribe to our RSS feed.

George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Find Storage Switzerland's disclosure statement here.

About the Author(s)

George Crump

President, Storage Switzerland

George Crump is president and founder of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. With 25 years of experience designing storage solutions for datacenters across the US, he has seen the birth of such technologies as RAID, NAS, and SAN. Prior to founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration, and product selection. George is responsible for the storage blog on InformationWeek's website and is a regular contributor to publications such as Byte and Switch, SearchStorage, eWeek, SearchServerVirtualization, and SearchDataBackup.

