
Desktop Virtualization And Storage - Dealing With The Cost

In this entry we continue our series on Desktop Virtualization and the challenges it creates for the storage infrastructure. Today we focus on the problem of cost. As I mentioned in the opening entry of this series, cost is always a big concern: you are taking your least expensive storage, desktops and laptops, and more than likely putting it on your most expensive, shared storage. How do you keep costs under control?

It should come as no surprise that in our recent webinar "Making Sure Desktop Virtualization Won't Break Storage" over 67% of respondents indicated that keeping costs under control was their number one concern. It is especially important when you consider the general lean toward using solid state storage to address boot storms, as we discussed in our boot storms entry. The good news is that there are plenty of capabilities, both in the desktop virtualization products themselves and in the storage you will run them on, to help curtail cost.

First, most desktop virtualization software and storage systems can thinly provision volumes, meaning that you don't have to allocate all the capacity that may eventually be needed, just the actual initial storage. Thin provisioning keeps capacity utilization high and frees storage that is not in use but is assigned to, and held captive by, a server. This is a critical function because captive free space cannot be optimized by the compression and deduplication techniques that we will discuss in a moment. Additionally, the amount of virtual desktop storage that will be needed is often difficult to predict, since so many optimization techniques will be applied; allocating that space dynamically eases the burden.
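To make the idea concrete, here is a minimal sketch of thin provisioning in Python. It is purely illustrative (the class and method names are my own, not from any storage product): the volume promises a large virtual capacity to the host, but physical blocks are consumed only when they are actually written.

```python
class ThinVolume:
    """Illustrative thin-provisioned volume: physical blocks are
    allocated on first write, not when the volume is created."""

    def __init__(self, virtual_size_blocks):
        self.virtual_size = virtual_size_blocks  # capacity promised to the host
        self.blocks = {}                         # physical blocks, allocated lazily

    def write(self, block_index, data):
        if block_index >= self.virtual_size:
            raise IndexError("write past end of volume")
        self.blocks[block_index] = data          # physical allocation happens here

    def physical_usage(self):
        return len(self.blocks)                  # only written blocks consume space


# A desktop volume provisioned at 1,000 blocks, with only 50 actually written
vol = ThinVolume(1000)
for i in range(50):
    vol.write(i, b"x")

print(vol.physical_usage())  # 50 blocks consumed, not the 1,000 promised
```

The same shape shows why captive free space matters: in a thick model the other 950 blocks would be allocated up front and sit idle, invisible to any space-reclamation feature.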

The second is the ability to create a master image of the desktops to be virtualized; in many cases hundreds of desktops can be served from a single image. This capability can also be performed by the storage system, where it is often called cloning or writeable snapshots. You'll need to compare which is the most efficient from a space utilization and performance impact standpoint. An advantage of using master images is that a hundred desktops can now boot from a single image that can easily fit onto the solid state storage area we discussed for dealing with boot storms.
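The mechanism behind writeable snapshots is copy-on-write, which can be sketched in a few lines of Python. This is a hedged illustration, not any vendor's implementation: each clone records only the blocks it has changed, and unchanged reads fall through to the shared master image.

```python
class MasterImage:
    """The shared golden desktop image, stored once."""
    def __init__(self, blocks):
        self.blocks = blocks


class Clone:
    """Copy-on-write clone: stores only its own changed blocks."""
    def __init__(self, master):
        self.master = master
        self.delta = {}                  # per-desktop changed blocks only

    def read(self, i):
        # Changed block if we have one, otherwise the shared master copy
        return self.delta.get(i, self.master.blocks[i])

    def write(self, i, data):
        self.delta[i] = data             # master image stays untouched


master = MasterImage([b"os"] * 1000)
desktops = [Clone(master) for _ in range(100)]
desktops[0].write(7, b"user-profile")

# 100 bootable desktops, but only 1 changed block of extra physical storage
print(sum(len(d.delta) for d in desktops))
```

This is also why the boot-storm story works: every clone's read of an unchanged block lands on the same master image, so only that one image needs to live on solid state.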

Despite the stringent use of master images, capacity growth in a virtualized desktop environment will still occur as those virtual desktops are personalized and as data that used to be stored on local C: drives is now stored on the SAN. This is where data optimization techniques like deduplication and/or compression come in. They are essential to the goal of driving cost out of desktop virtualization storage. Deduplication is especially effective in virtual environments because of the high likelihood of redundant data.
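A rough sketch of block-level deduplication shows why redundant desktop data is such an easy win. Assuming content-hash-based dedup (a common approach, though products vary): each unique block is stored once, keyed by its hash, and duplicates become references to the existing copy.

```python
import hashlib


class DedupStore:
    """Illustrative block-level dedup: unique blocks stored once,
    duplicates recorded as references to the existing block."""

    def __init__(self):
        self.store = {}    # content hash -> block data (unique blocks only)
        self.refs = []     # logical volume: ordered list of block hashes

    def write_block(self, data):
        h = hashlib.sha256(data).hexdigest()
        self.store.setdefault(h, data)   # stored once, however often written
        self.refs.append(h)

    def logical_blocks(self):
        return len(self.refs)

    def physical_blocks(self):
        return len(self.store)


s = DedupStore()
for _ in range(100):                     # 100 desktops writing the same OS block
    s.write_block(b"windows-system-block")
s.write_block(b"unique-user-data")       # plus one block of personalization

print(s.logical_blocks(), s.physical_blocks())  # 101 logical, 2 physical
```

One hundred desktops' worth of identical system blocks collapse to a single physical copy; only the personalized data adds real capacity.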

The combination of all of these techniques can lead to a capacity requirement reduction of as much as 90%. This reduction not only makes it easier to justify a small solid state area to handle boot storms, it also justifies the use of the corporate SAN, which in turn makes it easier to integrate virtual desktops into the data protection process. Protecting the virtual desktop environment is the subject of our next entry in this series.

Track us on Twitter: http://twitter.com/storageswiss

Subscribe to our RSS feed.

George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Find Storage Switzerland's disclosure statement here.