
News | Commentary
George Crump
10/1/2010 10:55 AM

Desktop Virtualization And Storage - Dealing With The Cost

In this entry we continue our series on Desktop Virtualization and the challenges it creates for the storage infrastructure. Today we focus on the problem of cost. As I mentioned in the opening entry of this series, cost is always a big concern. You are taking your least expensive storage, desktops and laptops, and more than likely moving it to your most expensive, shared storage. How do you keep costs under control?

It should come as no surprise that in our recent webinar, "Making Sure Desktop Virtualization Won't Break Storage," over 67% of respondents indicated that keeping costs under control was their number one concern. It is especially important when you consider the general lean toward using solid state storage to address boot storms, as we discussed in our boot storms entry. The good news is that there are plenty of capabilities in the desktop virtualization products themselves, and in the storage you will run them on, to help curtail cost issues.

First, most desktop virtualization software and storage systems are able to thinly provision volumes, meaning that you don't have to allocate all the potential capacity that may eventually be needed, just the actual initial storage. Thin provisioning keeps capacity utilization high and frees storage that is not in use but assigned to, and held captive by, a server. This is a critical function because that captive free space cannot be optimized by the compression and deduplication techniques that we will discuss in a moment. Additionally, the amount of virtual desktop storage that will be needed is often difficult to predict, since so many optimization techniques will be applied. Having that space allocated dynamically eases this burden.
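
To make the capacity math concrete, here is a minimal Python sketch of thin versus thick provisioning accounting; the class name, desktop count, and capacity figures are illustrative assumptions, not measurements from any particular array.

# Minimal sketch of thin vs. thick provisioning accounting (hypothetical names and sizes).
class ThinVolume:
    def __init__(self, provisioned_gb):
        self.provisioned_gb = provisioned_gb  # capacity promised to the desktop
        self.written_gb = 0                   # capacity actually consumed on the array

    def write(self, gb):
        # Physical capacity is allocated only when data is actually written.
        self.written_gb += gb

# 100 virtual desktops, each promised a 40 GB system drive but holding ~12 GB of data.
desktops = [ThinVolume(40) for _ in range(100)]
for d in desktops:
    d.write(12)

print("Provisioned:", sum(d.provisioned_gb for d in desktops), "GB")  # 4000 GB promised
print("Consumed:   ", sum(d.written_gb for d in desktops), "GB")      # 1200 GB actually allocated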

The second is the ability to create a master image of the desktops to be virtualized. In many cases hundreds of desktops can boot from a single stored image. This capability can also be provided by the storage system, often called cloning or writable snapshots. You'll need to compare which is the most efficient from a space utilization and performance impact standpoint. An advantage of using master images is that a hundred desktops can now boot from a single image that can easily fit onto the solid state storage area we discussed for dealing with boot storms.
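
As a rough illustration of why a shared master image saves so much space, the following Python sketch compares full desktop copies with clones that store only a small per-desktop delta; the image size, delta size, and desktop count are hypothetical.

# Space accounting for full copies vs. clones of a master image (illustrative figures).
MASTER_IMAGE_GB = 20       # one golden desktop image shared by every clone
PER_DESKTOP_DELTA_GB = 2   # writable per-desktop delta for personalization and temp data
DESKTOPS = 100

full_copies = DESKTOPS * MASTER_IMAGE_GB
cloned_desktops = MASTER_IMAGE_GB + DESKTOPS * PER_DESKTOP_DELTA_GB

print("Full copies:    ", full_copies, "GB")      # 2000 GB
print("Cloned desktops:", cloned_desktops, "GB")  # 220 GB; the shared image can live on solid state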

Despite the stringent use of master images, capacity growth in a virtualized desktop environment will still occur as those virtual desktops are personalized and as data that used to be stored on local C: drives is now stored on the SAN. This is where data optimization techniques like deduplication and/or compression come in. They are essential to completing the goal of driving cost out of desktop virtualization storage. Deduplication is especially effective in virtual environments because of the likelihood of redundant data.
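
For readers who want to see the mechanism, here is a toy Python sketch of block-level deduplication using content hashing; the block size and sample data are assumptions chosen purely to show why identical OS blocks across many desktops collapse to a single stored copy.

import hashlib

# Toy block-level deduplication: identical blocks across desktops are stored only once.
def dedup_store(volumes, block_size=4096):
    store = {}  # fingerprint -> block contents
    for data in volumes:
        for i in range(0, len(data), block_size):
            block = data[i:i + block_size]
            store[hashlib.sha256(block).hexdigest()] = block
    return store

# Two desktops that share the same OS/application blocks plus a little unique user data.
os_blocks = bytes(1_000_000)                  # stand-in for the common operating system image
desktop_a = os_blocks + b"user-a-data" * 500
desktop_b = os_blocks + b"user-b-data" * 500

unique_blocks = dedup_store([desktop_a, desktop_b])
raw_bytes = len(desktop_a) + len(desktop_b)
stored_bytes = sum(len(b) for b in unique_blocks.values())
print("Raw:", raw_bytes, "bytes  After dedup:", stored_bytes, "bytes")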

The combination of all of these techniques can lead to a capacity requirement reduction of as much as 90%. This reduction not only makes it easier to justify the use of a small solid state area to handle boot storms, it also justifies the use of the corporate SAN, which makes integrating virtual desktops into the data protection process an easier task. Protecting the virtual desktop environment is the subject of the next entry in this series.
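
As a back-of-the-envelope check on that 90% figure, the following Python snippet stacks illustrative savings ratios for thin provisioning, master images, and deduplication/compression; the individual percentages are assumptions, and real results will vary by environment.

# Back-of-the-envelope stack-up of the savings (illustrative ratios, not vendor guarantees).
raw_gb = 100 * 40                  # 100 desktops x 40 GB if thick-provisioned full copies
after_thin = raw_gb * 0.35         # thin provisioning: only written blocks consume space
after_clones = after_thin * 0.45   # master images: common OS data stored once
after_dedup = after_clones * 0.65  # dedup/compression on what remains

print("Raw requirement:  ", round(raw_gb), "GB")
print("After all savings:", round(after_dedup), "GB",
      "({}% reduction)".format(round((1 - after_dedup / raw_gb) * 100)))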

Track us on Twitter: http://twitter.com/storageswiss

Subscribe to our RSS feed.

George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Find Storage Switzerland's disclosure statement here.
