"Eventually most, if not all, data management services currently being implemented through end-point devices (replication, de-dupe/single instance store, snapshot, antivirus scanning, archiving) need to be delivered by the network," said Kirby Wadsworth, VP of global marketing for F5 Networks.
Future data centers will virtually integrate many processing points across many physical locations, rather than a few. "The network then becomes the only logical enforcement point for data management policies that must, by default, span locations, geographies, specific vendors, and individual technologies," Wadsworth added.
In that same vein, storage networks must develop "autonomic information infrastructures," according to Burt Kalisky, senior director in EMC's CTO office. By those, he means "information systems (computing, storage, and other orchestrated resources) that adapt themselves to meet service-level agreements automatically," among other features.
Converged Fabrics, Networks, And Protocols
This echoes the themes raised by analysts and users in the first half of this exercise. According to Jay Kidd, chief marketing officer for Network Appliance, Ethernet will eventually win out over Fibre Channel as the dominant storage network since it's cheaper to deploy and can be used for both block and file data. Ethernet also is "at the core of a virtual network, better suited to virtual machines and virtualized storage than Fibre Channel," Kidd said. "And the broad support for the Fibre Channel over Ethernet (FCoE) standard clearly establishes Ethernet as the long-term storage network of choice."
Cisco was quick to pick up on that point. By leveraging FCoE, enterprises will eventually get unified I/O and be able to reduce the number of server I/O adapters they deploy, according to Rajeev Bhardwaj, director of product management at Cisco's data center business unit. He also pointed to "services-oriented SANs" that will seamlessly extend intelligent fabric applications to heterogeneous devices anywhere in the network. Intelligent storage networking for virtual servers is a related hurdle that, once cleared, will deliver capacity planning, backup, and disaster recovery for virtual servers, all of which are sorely needed.
Only one vendor raised this issue: Hewlett-Packard. "We continue to see far too much vendor lock-in, which is getting worse with some of the proprietary virtualization appliances on the market," said Patrick Eitenbichler, director of marketing at HP's StorageWorks division. The Storage Networking Industry Association's SMI-S "is moving too slow -- and isn't adopted broadly enough," Eitenbichler added.
Current costs per gigabyte also need to move toward parity; as Eitenbichler noted, enterprise prices are badly out of line with consumer terabyte-class drives. "Protected storage in high-end and mid-range systems costs around $10 to $15 per GB, which is not sustainable with 1 TB priced at ~$500 in the consumer market," he said. That consumer price works out to roughly 50 cents per GB, a small fraction of the enterprise figure.
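Eitenbichler's numbers can be checked with back-of-the-envelope arithmetic (the dollar figures below are the ones quoted above; everything else is just unit conversion):

```python
# Cost-per-GB gap between enterprise "protected" storage and a consumer
# terabyte drive, using the prices quoted in the article.
consumer_drive_price = 500.0                        # USD for a 1 TB drive
consumer_cost_per_gb = consumer_drive_price / 1000  # 1 TB = 1000 GB -> $0.50/GB

enterprise_low, enterprise_high = 10.0, 15.0        # USD per GB, quoted range

gap_low = enterprise_low / consumer_cost_per_gb     # 20x premium
gap_high = enterprise_high / consumer_cost_per_gb   # 30x premium
print(f"Consumer: ${consumer_cost_per_gb:.2f}/GB; "
      f"enterprise premium: {gap_low:.0f}x to {gap_high:.0f}x")
```

So enterprise protected storage carries a 20x to 30x premium per gigabyte over consumer disk, which is the gap Eitenbichler argues is unsustainable.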
Hats off to HP for again calling attention to a challenge that, if met, wouldn't necessarily play out in the vendor's favor (unless making systems more affordable induces more customers to buy).
NetApp's Kidd also noted the need to greatly improve the efficiency of today's storage, where total disk space consumed can exceed unique data by more than 10:1. "Technologies such as snapshots, thin provisioning, virtual clones, more efficient RAID, and de-duplication will help keep costs down as usable capacity grows," Kidd said. "This is especially important in a VMware environment where so much duplicate data exists on networked storage."
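The core idea behind de-duplication and single-instance storage is simple: store each unique block once, keyed by a content hash, and let many logical copies reference it. A minimal sketch (the class and field names are illustrative, not any vendor's implementation):

```python
import hashlib

class DedupStore:
    """Toy single-instance store: identical blocks are stored once."""
    def __init__(self):
        self.blocks = {}    # content digest -> block bytes (stored once)
        self.refcount = {}  # content digest -> number of logical references

    def write(self, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        if digest not in self.blocks:
            self.blocks[digest] = data      # first copy: actually store it
        self.refcount[digest] = self.refcount.get(digest, 0) + 1
        return digest                       # caller keeps this as a handle

    def read(self, digest: str) -> bytes:
        return self.blocks[digest]

store = DedupStore()
# Ten identical VM images consume the physical space of one.
handles = [store.write(b"vm-image-contents") for _ in range(10)]
logical = 10 * len(b"vm-image-contents")
physical = sum(len(b) for b in store.blocks.values())
print(f"logical {logical} bytes, physical {physical} bytes")
```

The VMware case Kidd mentions is the ideal workload for this: dozens of guest images cloned from one template differ in only a few blocks, so the logical-to-physical ratio climbs steeply.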
He also pointed to the decline in cost of flash memory, and noted that its subsequent adoption will disrupt the structure of primary storage -- these "large monolith arrays," Kidd said. Rather, enterprises will move to high performance "flash-based cache supported by large, inexpensive archives to deliver the optimal cost/performance solution," he predicted.
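The tiering Kidd describes, a small fast cache in front of a large cheap archive, behaves like an LRU cache over a backing store. A minimal sketch, with sizes and names chosen for illustration (real flash-cache controllers are far more sophisticated):

```python
from collections import OrderedDict

class TieredStore:
    """Toy two-tier store: a small LRU 'flash' cache over a cheap archive."""
    def __init__(self, cache_blocks: int):
        self.cache = OrderedDict()   # fast tier (stand-in for flash)
        self.archive = {}            # large, slow, cheap tier
        self.cache_blocks = cache_blocks

    def put(self, key, data):
        self.archive[key] = data     # the archive always holds a copy

    def get(self, key):
        if key in self.cache:                 # flash hit: cheap
            self.cache.move_to_end(key)
            return self.cache[key]
        data = self.archive[key]              # miss: fetch from archive
        self.cache[key] = data                # then promote into the cache
        if len(self.cache) > self.cache_blocks:
            self.cache.popitem(last=False)    # evict least-recently-used
        return data

tiers = TieredStore(cache_blocks=2)
for i in range(4):
    tiers.put(f"block{i}", f"data{i}".encode())
tiers.get("block0")
tiers.get("block1")
tiers.get("block2")   # cache full: block0 is evicted
```

The economics follow directly: the hot working set lives on a small amount of expensive, fast media, while the bulk of the data sits on inexpensive capacity, which is the cost/performance split Kidd predicts.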
Tapes don't last forever -- just ask anyone at Iron Mountain -- and disks are subject to all sorts of possible problems. EMC's Kalisky called for the establishment of "reliable digital archives that can provide information assurance and availability for 100 years or more." In parallel, separating data from applications, with increased use of metadata to make data self-describing, would help in that regard, he said.
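Self-describing data means the payload travels with enough metadata that a future reader needs no knowledge of the original application. A sketch of what such an archive record might look like; the field names here are assumptions, not any standard's schema:

```python
import hashlib
import json

payload = b"quarterly-report-contents"

# Illustrative self-describing archive record: format, provenance, policy,
# and a fixity checksum all travel alongside the data itself.
record = {
    "format": "application/pdf",            # what the bytes are
    "created": "2008-06-01T00:00:00Z",      # provenance (illustrative date)
    "retention_years": 100,                 # policy travels with the data
    "sha256": hashlib.sha256(payload).hexdigest(),  # fixity check
}
manifest = json.dumps(record, indent=2)

# Decades later, any software that can parse the manifest can verify the
# payload against its own embedded checksum, with no original app required.
assert hashlib.sha256(payload).hexdigest() == record["sha256"]
```

The fixity checksum is what makes the 100-year availability claim testable: every migration between media generations can be verified against metadata stored with the data, not in a separate application.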