September 21, 2010
Virtual desktop environments are different from virtual server environments when discussing performance. In a virtual desktop environment we need to provide consistent, but moderate, performance throughout the day to a set of endpoints (desktops and laptops) that have similar I/O patterns. This is different from server virtualization, which has highly random I/O patterns and needs very high performance at peak moments throughout the day.

As we will discuss in our upcoming webinar "Making Sure Desktop Virtualization Won't Break Storage," a disadvantage that virtual desktop projects have is developing a scale or simulation of what the storage I/O load is going to look like. The server virtualization project manager can at least monitor the I/O patterns of the physical servers, especially if they were already on shared storage, and use those as a baseline. It is very challenging to analyze the I/O patterns of potentially thousands of individual users' desktops.
Virtual desktop actually has two areas of performance concern. The first, boot storms, gets most of the attention, but the second, providing consistent performance as the environment scales, can be equally important. A boot storm creates a large storage I/O demand when many users first log in to their systems at the beginning of the day or during a shift change. Other examples are logout at the end of a shift or a virus definition update; essentially, it is any forced action that causes a majority of the desktops to need storage I/O at nearly the same time.
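The gap between boot storm and steady-state demand can be sketched with some back-of-the-envelope arithmetic. The per-desktop figures and concurrency fraction below are hypothetical placeholders, not measured values; substitute numbers from your own environment.

```python
# Illustrative boot storm sizing. All figures below are assumptions
# for the sake of example, not vendor or measured data.

def boot_storm_iops(desktops, iops_per_boot, concurrency_fraction):
    """Peak aggregate IOPS when a fraction of desktops boot at once."""
    return desktops * concurrency_fraction * iops_per_boot

def steady_state_iops(desktops, iops_per_desktop):
    """Aggregate IOPS during normal working hours."""
    return desktops * iops_per_desktop

# Assume 1,000 desktops, half booting in the same window at ~50 IOPS each,
# versus ~8 IOPS per desktop during the day.
peak = boot_storm_iops(desktops=1000, iops_per_boot=50, concurrency_fraction=0.5)
steady = steady_state_iops(desktops=1000, iops_per_desktop=8)

print(f"Boot storm peak: {peak:,.0f} IOPS")   # 25,000
print(f"Steady state:    {steady:,.0f} IOPS") # 8,000
```

Even with these rough assumptions, the peak is several times the steady-state load, which is why capacity sized for the boot storm sits idle most of the day.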
The increasingly universal solution to the boot storm problem is to turn to solid state technology. There are some storage solutions that can deliver very impressive performance numbers from conventional disk, so make sure you compare both options. The challenge with turning to solid state disk is that you have now decided to move what was your least expensive storage, desktop and laptop hard drives, onto the most expensive: shared solid state disk. To make matters worse, the use of solid state storage may only be justifiable during specific boot storm windows. Ideally you would want to use that solid state capacity elsewhere when it is not needed for the boot storm.
This periodic need can make solid state storage ideal as a large cache. While some storage systems can have their internal cache expanded, that expansion comes with limited capacity and very high cost because it is DRAM based. Leveraging solid state disk as a caching area for virtual desktops can be an ideal alternative. At a more affordable price than DRAM, it provides performance for the virtual desktop environment when boot storms occur but can be leveraged by the rest of the environment when they do not. Also, because of some of the optimization techniques that we will discuss in an upcoming entry, much of the virtual desktop data set can often be loaded into this cache area during the boot storm. The one issue with caching is trying to pre-load it; most often a cache loads data only when that data area is first accessed. Once loaded, though, it should provide users with ideal performance during the boot storm process.
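The pre-load limitation is easy to see in a minimal read-cache sketch. This is a generic LRU model, not any vendor's implementation: blocks enter the cache only when first read, so the first desktop to boot pays the cold misses, while later desktops booting from the same master image mostly hit the warm cache.

```python
from collections import OrderedDict

class ReadCache:
    """Minimal read-cache sketch (illustrative only). Blocks are cached
    only on first access, mirroring the pre-load limitation described
    above; eviction is least-recently-used."""

    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()
        self.hits = 0
        self.misses = 0

    def read(self, block_id):
        if block_id in self.blocks:
            self.hits += 1
            self.blocks.move_to_end(block_id)  # refresh LRU position
        else:
            self.misses += 1                   # served from disk, then cached
            self.blocks[block_id] = True
            if len(self.blocks) > self.capacity:
                self.blocks.popitem(last=False)  # evict least recently used

cache = ReadCache(capacity_blocks=100)
# First desktop booting pays the cold misses...
for block in range(50):
    cache.read(block)
# ...subsequent desktops reading the same master-image blocks hit cache.
for block in range(50):
    cache.read(block)
print(cache.misses, cache.hits)  # 50 50
```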
Another option could be to use an automated tiering technique that some vendors are providing; some include the ability to pre-stage certain data sets at certain times of day. Here you have to be careful that the automated tiering solution provides enough granularity so that the movement of data in and out of the solid state tier does not cause a performance penalty of its own.
The final option is a dedicated solid state disk area. Again, with the optimization that is often found in virtual desktop packages, the capacity investment may be relatively small, and the remaining solid state disk could be used for other data sets. Migration of live virtual desktop images between storage areas is supported by a few virtual desktop software suppliers, so pre-loading and unloading, either through scripting or manually, is a possibility. Unlike caching and automated tiering, the solid state area may need special incorporation into the data protection process.
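A pre-load/unload script for such a dedicated area boils down to a schedule lookup. The sketch below is hypothetical; the image names and time windows are placeholders, and the actual migration would be performed by your virtual desktop vendor's tooling rather than this function.

```python
from datetime import time

# Hypothetical schedule: image names and windows are illustrative only.
# Real migrations would be driven by your virtual desktop vendor's tools.
PRESTAGE_WINDOWS = {
    "day-shift-master.vmdk":   (time(6, 0),  time(10, 0)),
    "night-shift-master.vmdk": (time(21, 0), time(23, 59)),
}

def desired_tier(image, now):
    """Return 'ssd' during the image's boot-storm window, else 'disk'."""
    window = PRESTAGE_WINDOWS.get(image)
    if window and window[0] <= now <= window[1]:
        return "ssd"
    return "disk"

print(desired_tier("day-shift-master.vmdk", time(7, 30)))  # ssd
print(desired_tier("day-shift-master.vmdk", time(14, 0)))  # disk
```

A cron-style job could call such a check periodically and trigger a live migration whenever an image's desired tier changes, freeing the solid state capacity for other workloads outside the boot storm windows.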
In our next entry we will cover the second performance challenge: maintaining performance as the virtual desktop environment grows.
Track us on Twitter: http://twitter.com/storageswiss
George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Find Storage Switzerland's disclosure statement here.
About the Author(s)
President, Storage Switzerland
George Crump is president and founder of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. With 25 years of experience designing storage solutions for datacenters across the US, he has seen the birth of such technologies as RAID, NAS, and SAN. Prior to founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration, and product selection. George is responsible for the storage blog on InformationWeek's website and is a regular contributor to publications such as Byte and Switch, SearchStorage, eWeek, SearchServerVirtualization, and SearchDataBackup.