NFS didn't need to be saved, but thanks to VMware its use has broadened beyond the traditional Unix implementations. Instead of creating a LUN for each VMware virtual disk (VMDK), with NFS you manage multiple VMDK files on a single NFS volume. This makes sense because VMDKs are files, not actual disks.

NFS stands for Network File System. It is a client/server system that lets users access files across a network and treat them as if they resided in a local directory. This is accomplished through exporting (the process by which an NFS server gives remote clients access to its files) and mounting (the process by which file systems are made available to the operating system and the user). It has traditionally been used for Unix-to-Unix file sharing.
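As a concrete sketch of exporting and mounting (the server name and paths here are hypothetical, not from any particular product):

```shell
# On the NFS server: export a directory to clients via /etc/exports
# /etc/exports
/vol/vmstore  *(rw,no_root_squash)

# Re-read the exports file so the new export takes effect
exportfs -ra

# On the client: mount the export at a local directory;
# files under /mnt/vmstore now behave like local files
mount -t nfs nfs-server:/vol/vmstore /mnt/vmstore
```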
Even if all your VMs are Windows-based, NFS is still an option for you. While it's true that Windows can't boot from NFS, VMware built NFS into its disk virtualization layer so that Windows doesn't have to. NFS works so well that I believe we will see a reversal in implementation strategies over the next few years, and that NFS, not Fibre Channel, will become the dominant storage deployment method.
To be clear, NFS isn't the only protocol, and there are cases where it isn't a good fit. Microsoft Cluster Services requires block access, for example, and some workloads demand the raw performance of Fibre Channel. iSCSI has a few unique abilities, one being that a logical unit number can be assigned directly to a guest OS rather than going through the VMware disk virtualization layer. That makes it possible to move specific LUNs quickly out of the VMware environment.
Why NFS? It makes life much easier for the storage and VMware administrators, and in many VMware environments there is little, if any, performance penalty. Except for storage manufacturers that provide virtualized solutions, LUN management is a huge challenge for the VMware/storage administrator. With an NFS implementation, you provision additional VMware images within a single file system. You also pick up access control through NFS's built-in security, allowing you to provision an NFS file system to a group of VMware managers without micromanaging each LUN. Imagine grouping all your VMware images in folders by application type.
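One way this might look (host names and the folder layout are illustrative assumptions): restrict the export to your ESX hosts in /etc/exports, then group VMDKs by application in folders under the one file system:

```shell
# /etc/exports -- only the listed ESX hosts may mount, read-write
/vol/vmstore  esx01(rw,no_root_squash) esx02(rw,no_root_squash)

# Folder layout inside the single exported file system:
#   /vol/vmstore/exchange/   <- Exchange server VMDKs
#   /vol/vmstore/web/        <- web server VMDKs
#   /vol/vmstore/sql/        <- database VMDKs
```

One export entry handles access control for every image beneath it, instead of one LUN-masking exercise per virtual disk.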
Lastly, your access path is now traditional Ethernet, which obviously drives down cost but, more important, makes troubleshooting easier, since most organizations have deeper IP expertise than Fibre Channel expertise.
The big gain is access. All the ESX servers can get to the single mount point. VMotion still makes me say "Wow!" when I see it in use. In Fibre Channel deployments, each ESX server ideally has to see every other ESX server's LUNs, which can be very difficult to configure and manage. NFS is a sharing technology, so shared access is built in, and VMotion becomes easy.
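On ESX 3.x, for instance, attaching each host to the same NFS datastore is a single command per host, run from the service console (the server name, share path, and datastore label below are hypothetical):

```shell
# Add the shared NFS export as a datastore named "vmstore"
esxcfg-nas -a -o nfs-server -s /vol/vmstore vmstore

# List configured NFS datastores to confirm
esxcfg-nas -l
```

Because every host mounts the identical share, VMotion gets the shared storage it needs with no zoning or LUN masking.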
To make this work well, you need a real NAS solution that can scale from both an I/O performance perspective and a capacity perspective. Network Appliance has been out in front promoting VMware over NFS, but solutions from OnStor and EMC now offer similar capabilities.
In the end, simple always wins. In time, VMware over NFS will become the predominant protocol for VMware deployments because it makes life easier for VMware and storage administrators. It also amortizes a NAS investment beyond the classic file server: the NAS can now become the heart of the VMware infrastructure.
George Crump is founder of Storage Switzerland, an analyst firm focused on the virtualization and storage marketplaces. It provides strategic consulting and analysis to storage users, suppliers, and integrators. An industry veteran of more than 25 years, Crump has held engineering and sales positions at various IT industry manufacturers and integrators. Prior to Storage Switzerland, he was CTO at one of the nation's largest integrators.