On the storage side, often called the target, there needs to be something that receives the iSCSI traffic: either a gateway that converts iSCSI to another protocol such as Fibre Channel or SCSI or, more commonly today, a storage system that is natively iSCSI. When configuring the devices, the iSCSI initiator queries the iSCSI storage system for a list of available volumes, and you then select which volume should be assigned to that server.
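On Linux, the discovery-and-assignment step described above is typically done with the open-iscsi initiator's iscsiadm tool. A minimal sketch (the portal address and target name below are made-up examples):

```shell
# Ask the storage system (the "portal") which targets/volumes it exports.
iscsiadm -m discovery -t sendtargets -p 192.168.10.50:3260

# Log in to one of the discovered targets; the volume then appears to the
# server as an ordinary SCSI block device (e.g. /dev/sdX).
iscsiadm -m node -T iqn.2010-01.com.example:volume1 \
         -p 192.168.10.50:3260 --login
```

Which initiator names are allowed to see which volumes is normally controlled on the array side (LUN masking), so the list returned by discovery may differ per server.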
All of this connectivity happens over Ethernet as the interconnecting infrastructure, which today, especially in its 1GbE form, can be had for very little investment. Most businesses and data centers learn how to interconnect systems over Ethernet long before they need storage interconnectivity. iSCSI's theoretical advantage, then, is that when the time comes for shared storage, the IT staff already knows the infrastructure part and half the job is complete. Now they just need to learn storage.
While iSCSI does have its advantages, it also has some potential pitfalls that need to be either worked around or avoided. The first is that iSCSI is a real storage protocol and needs to be treated like one. That means it really should be on its own network, either physically or logically. Otherwise, storage traffic can congest the standard network and cause performance or reliability issues. Keeping storage on its own network also makes it easier to diagnose problems on either network.
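The "logically separate" option usually means a dedicated storage VLAN. A minimal sketch of carving one out on a Linux host with iproute2 (the NIC name, VLAN ID, and addressing are made-up examples; the matching VLAN must also be configured on the switch):

```shell
# Create a tagged VLAN interface for storage traffic on top of eth1.
ip link add link eth1 name eth1.200 type vlan id 200

# Give it an address on the storage-only subnet and bring it up.
ip addr add 192.168.200.11/24 dev eth1.200
ip link set eth1.200 up
```

The iSCSI initiator is then pointed at the portal address on that subnet, keeping storage I/O off the general-purpose network even when both share physical cabling.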
iSCSI may start out simple, but as it scales it can become challenging. Fine-tuning an IP network for maximum performance requires experience and understanding. Care must be taken when selecting Ethernet cards and switches to make sure they can support the full speed you are implementing. Many low-end switches, for example, are not designed to have all or even most of their ports running at full speed at the same time. They are counting on bandwidth use being spread randomly between ports, with only a few ports needing full speed at any point in time. The problem is that flooding all available ports with traffic is entirely possible in a storage environment, for example when backing up servers with iSCSI-attached storage to an iSCSI-attached disk backup target. Keeping these networks separate and making sure the components will support a fully active data path are critical.
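The oversubscription problem above can be checked with simple arithmetic against the switch's data sheet. A back-of-the-envelope sketch (the port count and fabric capacity below are hypothetical figures for a low-end switch; substitute your vendor's numbers):

```shell
# Hypothetical low-end switch: 24 1GbE ports, 16 Gb/s switching fabric.
ports=24
port_speed_gbps=1
fabric_gbps=16

# Worst case: every port pushing line-rate traffic at once (e.g. a backup
# window hitting iSCSI-attached servers and an iSCSI backup target).
demand_gbps=$((ports * port_speed_gbps))

echo "worst-case demand: ${demand_gbps} Gb/s; fabric capacity: ${fabric_gbps} Gb/s"
if [ "$demand_gbps" -gt "$fabric_gbps" ]; then
    echo "oversubscribed: not all ports can run at full speed simultaneously"
fi
```

A non-blocking switch has fabric capacity at least equal to the sum of its port speeds; anything less means some ports will stall under a fully active data path.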
Performance is another scaling concern. Most iSCSI storage environments are still 1GbE based, even newer ones. 1GbE is more readily available, and its cost, usually a key iSCSI motivator, is significantly lower than the 10GbE alternative. For some, especially smaller environments, 1GbE provides all the storage I/O bandwidth they will ever need. Others will look at using multiple 1GbE connections from the servers to increase performance, or will look at 10GbE. In multi-1GbE configurations, make sure that your iSCSI initiator supports the configuration and that you don't see a big performance drop-off when adding the second interface card. Also check whether those cards can be used in an active-active fashion, not only as a failover pair. If you decide to invest in 10GbE, make sure that everything else in the environment can keep up with the 10GbE connection. Many environments have trouble getting full line-speed performance out of a 10GbE connection and end up able to use only 30 to 40% of the available bandwidth.
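On Linux, the multi-1GbE approach usually means binding a separate iSCSI session to each NIC with open-iscsi interface bindings, then letting multipath I/O aggregate them. A minimal sketch (interface names, portal address, and whether the array accepts multiple sessions per initiator are all assumptions to verify for your environment):

```shell
# Define one open-iscsi interface record per physical NIC.
iscsiadm -m iface -I iface-eth1 --op=new
iscsiadm -m iface -I iface-eth1 --op=update -n iface.net_ifacename -v eth1
iscsiadm -m iface -I iface-eth2 --op=new
iscsiadm -m iface -I iface-eth2 --op=update -n iface.net_ifacename -v eth2

# Discover the target through both interfaces, then log in; this creates
# one session per NIC to the same volume.
iscsiadm -m discovery -t sendtargets -p 192.168.10.50:3260 \
         -I iface-eth1 -I iface-eth2
iscsiadm -m node -p 192.168.10.50:3260 --login
```

The two resulting block devices point at the same volume, so a multipath layer such as dm-multipath is needed on top to present one device and, if configured active-active, to spread I/O across both links rather than using the second only for failover.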
iSCSI has a role to play in both the enterprise and the SMB. It can drive down costs, but it does have some limitations that must be worked around or avoided. Knowing them will help you make the right protocol selection for your shared storage environment.
George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Find Storage Switzerland's disclosure statement here.