George Crump

Which Storage Protocol Is Best For VMware?

In a recent entry on his blog, StorageTexan asks why someone "would choose to go NFS instead of doing block based connectivity for things like vSphere," and while I gave a brief opinion as a comment on his site, I thought I would take a deeper dive here. Which storage protocol is best for VMware?

I have to give the storage community credit: for the most part, there is no longer a knee-jerk response to that question. In part this is because most vendors can offer at least two different storage protocols today, and in part it is because the large majority of people working at vendors really do have the customer's best interest in mind when it comes to protocol selection. The first fact makes the second fact easier.

While there are some interesting alternatives on the horizon, the choice for now comes down to three protocols: iSCSI, Fibre Channel, or NFS/NAS. The reality is that in many cases the initial protocol selection comes down to what you, the customer, are most comfortable with. While Fibre Channel is the performance leader, the IP-based protocols can typically be tweaked to provide most of its benefits. As you begin to extend the IP-based protocols, though, you run into much of the same complexity that you do with Fibre Channel, because you are essentially designing a storage network that just happens to run on an IP infrastructure.

If you are not forced to use specialized iSCSI HBAs or higher-end Ethernet cards, there should be a cost advantage for IP over Fibre Channel. In server virtualization, however, we are greatly reducing the number of physical servers in the first place, so that cost advantage is not as great as it may have been in non-virtualized environments. If you are virtualizing 100 servers onto 10 physical hosts, that may be as few as 20 HBAs to purchase (two per host for redundancy). Additionally, as I wrote about in my last entry, FCoE can consolidate the IP and Fibre Channel adapters even further.
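The consolidation math above can be sketched quickly. This is an illustrative calculation only: the adapter prices are assumptions I am plugging in for the sake of the example, not vendor quotes, and the 2-adapters-per-host figure is the common redundant configuration.

```python
# Hypothetical cost comparison: consolidating 100 physical servers onto
# 10 virtualization hosts shrinks the adapter count, narrowing the gap
# between Fibre Channel and plain-Ethernet (iSCSI/NFS) connectivity.
# All prices below are illustrative assumptions, not quotes.

HOSTS = 10            # physical servers remaining after virtualization
HBAS_PER_HOST = 2     # dual adapters per host for path redundancy

FC_HBA_COST = 800     # assumed cost per Fibre Channel HBA
ETH_NIC_COST = 100    # assumed cost per standard Ethernet NIC

adapters = HOSTS * HBAS_PER_HOST          # 20, as in the article
fc_total = adapters * FC_HBA_COST
eth_total = adapters * ETH_NIC_COST

print(f"adapters needed:  {adapters}")
print(f"Fibre Channel:    ${fc_total}")
print(f"Ethernet:         ${eth_total}")
print(f"difference:       ${fc_total - eth_total}")
```

Run the same numbers against 100 un-virtualized servers and the difference is ten times larger, which is the article's point: virtualization itself shrinks the connectivity cost gap.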

NFS does bring some uniqueness to the equation. First, virtual machines are essentially a collection of files. NAS/NFS thin provisions VMs natively, and setting up shared access from multiple physical hosts seems more natural with NFS. One area of concern is the ability of a single NAS head to handle the random I/O requests from potentially hundreds, if not thousands, of virtual machines; this might bog down the NAS head more rapidly than a block-based array would. The result could be NAS head sprawl as you try to deal with virtual machine sprawl.
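The "VMs are just files" point is why thin provisioning falls out naturally on a file-based datastore: a sparse file reports its full provisioned size while the filesystem only allocates the blocks actually written. The sketch below uses a local temporary file as a stand-in for a virtual disk on an NFS datastore; `st_blocks` is a POSIX stat field, so the behavior assumes a Unix-like filesystem that supports sparse files.

```python
# Minimal illustration of file-level thin provisioning: "provision" a
# 100 MB virtual disk as a sparse file, write only a few bytes into it,
# then compare the logical size the guest would see with the physical
# space the filesystem actually consumed.
import os
import tempfile

with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
    f.truncate(100 * 1024 * 1024)   # provision 100 MB (creates a hole)
    f.seek(0)
    f.write(b"guest data")          # the "guest" writes a handful of bytes

st = os.stat(path)
logical = st.st_size                # what is provisioned: 100 MB
physical = st.st_blocks * 512       # what is actually stored: a few KB

print(f"provisioned: {logical} bytes")
print(f"consumed:    {physical} bytes")
os.unlink(path)
```

A block array can offer the same feature, but it has to implement it in the array firmware; on NFS it is simply how files behave.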

I no longer buy into the simplicity advantage that NAS has. NAS has not become more difficult; rather, iSCSI and Fibre Channel have become easier to use. iSCSI storage systems from vendors focused on the SMB market in particular have made significant ease-of-use improvements in setting up a shared environment. As I stated earlier, as an environment leveraging any of the three protocols grows, it becomes more complex. These are storage networks regardless of what type of cable they run on, and as they scale you are going to need capabilities like VM-aware QoS to maintain service levels for the right applications.

While each protocol has some advantages, it's what the storage suppliers are doing with their systems that seems to be the key factor for IT professionals. If they can help you with tasks like VMware storage management via integration, provisioning, data placement, and data protection, those will be the deciding factors. The protocol, while important, may be the second decision point and will largely be driven by which storage system you select. An interesting question is at what point you would switch protocols, something we will dive into deeper in our next entry.


George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Find Storage Switzerland's disclosure statement here.
