Up To The Challenge?

Mask complexity, improve performance, and automate every last function possible -- those, in a giant nutshell, are the biggest engineering challenges for storage in the next several years, according to some big thinkers who've deployed a SAN or two in their time.

A few days ago, I mentioned the list drawn up by the National Academy of Engineering of the largest technical challenges of the next century. It got me wondering how storage experts whose opinions I trust might answer the same question for this sector of IT. I also reached out to a few vendors to see where they are with their thinking; I'll post their responses next week.

In the meantime, here's what a power user, two long-time storage consultants, and a financial analyst identified as storage's biggest challenges. Everyone -- vendor and nonvendor alike -- cited the challenge of "greening up" storage: reducing its carbon footprint and power requirements.

Unified storage: A truly open, usable, standardized storage "grid" capability that works out of the box in enterprise environments is the biggest desired technology breakthrough in the storage realm. It would marry virtual file systems, storage virtualization, tiered storage mechanisms for information life cycle management (ILM), dynamic on-demand provisioning, and data de-duplication, with built-in replication and disaster recovery (DR) capabilities, plus all the essential data caching and RAID levels needed to meet performance requirements. It would be a self-healing, re-routing infrastructure, distributed across multiple geographic locations (data centers), with enough intelligence to recover from any type of issue with minimal human intervention, if any.

-- Harold Shapiro, senior VP and CIO of Indieroad.net
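Shapiro's grid is easiest to picture as policy-driven placement: the application names a data class, and the infrastructure resolves the tier, RAID level, and replica sites on its own. Here's a minimal Python sketch of that idea -- every class, policy, and site name is hypothetical, and a real grid would add the self-healing and re-routing he describes:

```python
# A minimal sketch (hypothetical names throughout) of a policy-driven
# "storage grid": one placement call, with tiering, cross-site replication,
# and RAID level all resolved by policy rather than by hand.
from dataclasses import dataclass

@dataclass
class StoragePolicy:
    tier: str            # e.g. "gold" = fast disk, "bronze" = archival
    replicas: int        # copies kept across geographically separate sites
    raid_level: int      # RAID level applied within each site

POLICIES = {
    "oltp":    StoragePolicy(tier="gold",   replicas=3, raid_level=10),
    "archive": StoragePolicy(tier="bronze", replicas=2, raid_level=6),
}

SITES = ["dc-east", "dc-west", "dc-europe"]  # hypothetical data centers

def place(object_id: str, data_class: str) -> list:
    """Return the sites an object lands on, chosen entirely by policy."""
    policy = POLICIES[data_class]
    # Spread replicas across distinct sites for DR; a real grid would also
    # re-route and self-heal when a site drops out.
    targets = SITES[:policy.replicas]
    print(f"{object_id}: tier={policy.tier} RAID-{policy.raid_level} -> {targets}")
    return targets

place("invoice-0042", "oltp")
place("q3-backup",    "archive")
```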

Hardware and software interoperability: Meaning any storage management software product can run on any server/OS and manage any vendor's product or device. Disk formats are still unique to each OS, and as of today proprietary systems are still winning the war -- certain software works only with specific hardware. IT, and the storage industry in particular, along with SNIA, has been chasing true plug-and-play interoperability for nearly 20 years, and it's still not there. The goal is a single backup/recovery product that supports the storage hardware on Unix, Linux, and Windows, rather than a separate backup/recovery product for each OS with different GUIs, screens, and so on.

-- Fred Moore, president, Horison Inc.
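Moore's goal amounts to a common device interface: if every vendor implemented one standardized surface, a single backup/recovery product could drive them all. A minimal sketch of that shape, with all class and method names hypothetical, loosely in the spirit of SNIA's SMI-S management model:

```python
# One backup flow that works against any vendor's array through a shared
# interface, instead of one product per OS/array combination.
from abc import ABC, abstractmethod

class StorageDevice(ABC):
    """The standardized surface every vendor would have to implement."""
    @abstractmethod
    def snapshot(self, volume: str) -> str: ...
    @abstractmethod
    def restore(self, snapshot_id: str, volume: str) -> None: ...

class VendorAArray(StorageDevice):
    def snapshot(self, volume):
        return f"A-snap:{volume}"
    def restore(self, snapshot_id, volume):
        print(f"VendorA restoring {snapshot_id} onto {volume}")

class VendorBArray(StorageDevice):
    def snapshot(self, volume):
        return f"B-snap:{volume}"
    def restore(self, snapshot_id, volume):
        print(f"VendorB restoring {snapshot_id} onto {volume}")

def backup_and_verify(device: StorageDevice, volume: str) -> None:
    """One backup/recovery flow, identical regardless of vendor or host OS."""
    snap = device.snapshot(volume)
    device.restore(snap, volume + "-verify")

for array in (VendorAArray(), VendorBArray()):
    backup_and_verify(array, "/data/finance")
```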

I/O aggregation and virtualization: Storage is very costly because the protocols differ for data, for storage, and for inter-server communication, each requiring its own adapters, cables, and technologies. If one device at the top of the rack or in the blade chassis aggregated all the I/O and directed it to the target devices, it would reduce overall cost significantly.

-- Kaushik Roy, research analyst, data center technologies, Pacific Growth Equities LLC
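Roy's aggregator is, at heart, a classifier: it takes mixed traffic off one converged link and steers each class to the right fabric. A toy model in Python, with illustrative traffic classes and fabric names rather than any real product's behavior:

```python
# Servers emit every kind of I/O down one converged link; a single
# top-of-rack device classifies each unit and forwards it to the
# appropriate fabric.
FABRICS = {
    "storage": "FC/FCoE SAN",
    "network": "Ethernet LAN",
    "cluster": "low-latency interconnect",
}

def aggregate(packets):
    """Route mixed traffic from one converged link onto per-class fabrics."""
    for traffic_class, payload in packets:
        target = FABRICS.get(traffic_class, "Ethernet LAN")  # default path
        print(f"{traffic_class:8} -> {target}: {payload}")

# One adapter and one cable per server instead of three of each.
aggregate([
    ("storage", "SCSI write, LUN 7"),
    ("network", "HTTP response"),
    ("cluster", "MPI message"),
])
```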

Improved price/performance: Make storage drives faster and cheaper, perhaps using single-level cell (SLC) flash solid state disks (SSDs) or some other new technology. Make storage systems faster in general -- storage performance has not kept up with capacity increases.

-- Marc Staimer, president, Dragonslayer Consulting
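Staimer's capacity-versus-performance gap is easy to see with rough numbers. The drive figures below are illustrative ballparks, not benchmarks: random IOPS on a spinning drive are bounded by seek and rotational latency, so they barely move while capacity grows roughly 30-fold.

```python
# Back-of-the-envelope illustration: a late-1990s 9 GB drive vs. a
# 2008-era 300 GB 15K drive. Capacity grows ~30x; random IOPS hardly move.
drives = {
    "9 GB, 10K RPM (late 1990s)": {"capacity_gb": 9,   "iops": 120},
    "300 GB, 15K RPM (2008-era)": {"capacity_gb": 300, "iops": 180},
}

for name, d in drives.items():
    density = d["iops"] / d["capacity_gb"]
    print(f"{name}: {density:.2f} IOPS per GB")

# IOPS per GB falls from ~13 to ~0.6: each stored gigabyte gets far less
# random-access performance, which is the gap SSDs aim to close.
```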

Mainframe-like management features: The storage and data management capabilities of Unix, Linux, and Windows may be 15 to 20 years behind the mainframe's. Specifically, the mainframe has the most powerful policy-based storage management engine ever developed, called DFSMS. This software (though not perfect) implements data classification and lets user policies initiate proactive tasks such as optimized data placement, automatic backup/recovery, and Hierarchical Storage Management (HSM), which is key to effectively implementing tiered storage. Nonmainframe systems software suppliers are trying hard to offer mainframe-like functionality. For example, nonmainframe systems are just now implementing thin provisioning, which first appeared on the mainframe in 1965 with OS/360. HSM first appeared in 1975, and there is still no effective cross-platform HSM for Unix, Linux, and Windows.

-- Fred Moore
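The DFSMS behavior Moore points to boils down to policy-driven migration: data ages, a policy decides when it moves to a cheaper tier, and recall is transparent to the user. A tiny sketch of that HSM pattern follows; the tier names and age thresholds are hypothetical:

```python
# Toy hierarchical storage management: migrate data down the tier
# hierarchy based on how recently it was accessed.
from dataclasses import dataclass

TIERS = ["fast-disk", "capacity-disk", "tape"]  # most to least expensive

@dataclass
class DataSet:
    name: str
    days_since_access: int
    tier: str = "fast-disk"

def apply_hsm_policy(ds: DataSet) -> DataSet:
    """Migrate down the hierarchy by age; a real engine would also honor
    per-class policies, backup windows, and transparent recall on access."""
    if ds.days_since_access > 365:
        ds.tier = "tape"
    elif ds.days_since_access > 30:
        ds.tier = "capacity-disk"
    print(f"{ds.name}: last touched {ds.days_since_access}d ago -> {ds.tier}")
    return ds

for ds in (DataSet("payroll.db", 2),
           DataSet("q1-reports", 90),
           DataSet("2003-audit", 1500)):
    apply_hsm_policy(ds)
```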

Fibre Channel over Ethernet (FCoE): What if a new type of enhanced Ethernet (aka Data Center Ethernet, or DCE) could be designed that uses Ethernet extensions to achieve the reliability and efficiency of Fibre Channel? If so, it would be possible to encapsulate Fibre Channel data within the Ethernet frame, and thus meld storage, messaging, VoIP, video, and other data over the same "unified" physical network.

-- Kaushik Roy
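Mechanically, the encapsulation Roy describes is simple: a Fibre Channel frame rides as the payload of an Ethernet frame tagged with FCoE's registered EtherType, 0x8906. The sketch below shows just that wrapping; the MAC addresses and FC payload are dummies, and real FCoE adds an FCoE header, SOF/EOF delimiters, and padding that are omitted here.

```python
# Wrap a stand-in Fibre Channel frame in a minimal Ethernet II header.
import struct

def fcoe_encapsulate(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Prefix a raw FC frame with destination MAC, source MAC, EtherType."""
    ETHERTYPE_FCOE = 0x8906  # FCoE's registered EtherType
    header = dst_mac + src_mac + struct.pack("!H", ETHERTYPE_FCOE)
    return header + fc_frame

dst = bytes.fromhex("0efc00010203")            # dummy fabric-assigned MAC
src = bytes.fromhex("0efc000a0b0c")            # dummy server-side MAC
fc  = b"\x02" + b"\x00" * 23 + b"SCSI CDB..."  # stand-in FC frame bytes

frame = fcoe_encapsulate(dst, src, fc)
print(f"{len(frame)} bytes on the wire, EtherType 0x{frame[12]:02x}{frame[13]:02x}")
```

Storage, messaging, VoIP, and video then share one physical network, because the switch sees ordinary Ethernet frames and only the EtherType distinguishes the storage traffic.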

About the Author

Terry Sweeney, Contributing Editor

Terry Sweeney is a Los Angeles-based writer and editor who has covered technology, networking, and security for more than 20 years. He was part of the team that started Dark Reading and has been a contributor to The Washington Post, Crain's New York Business, Red Herring, Network World, InformationWeek and Mobile Sports Report.

In addition to information security, Sweeney has written extensively about cloud computing, wireless technologies, storage networking, and analytics. After watching successive waves of technological advancement, he still prefers to chronicle the actual application of these breakthroughs by businesses and public sector organizations.

