
News | Commentary
2/23/2008 12:42 AM
Terry Sweeney

Up To The Challenge?

Mask complexity, improve performance, and automate every last function possible -- those, in a giant nutshell, are the biggest engineering challenges for storage in the next several years, according to some big thinkers who've deployed a SAN or two in their time.

A few days ago, I mentioned the list drawn up by the National Academy of Engineering of the largest technical challenges of the next century. It got me wondering how storage experts whose opinions I trust might answer that question where this sector of IT is concerned. I also reached out to a few vendors to see where they are with their thinking; I'll post their responses next week.

In the meantime, here's what a power user, two long-time storage consultants, and a financial analyst identified as storage's biggest challenges. Everyone -- vendor and nonvendor alike -- cited the challenge of "greening up" storage: reducing its carbon footprint and power requirements.

Unified storage: A truly open, usable, standardized storage 'grid' capability, out of the box, for enterprise environments is the biggest desired technology breakthrough in the storage realm. It would be a marriage of virtual file systems, storage virtualization technologies, tiered storage mechanisms for attaining information life cycle management (ILM), dynamic on-demand provisioning, and data de-duplication, with built-in replication and DR (redundancy) capabilities as well as all the essential caching and RAID levels to meet performance requirements. It would be a self-healing, re-routing infrastructure, distributed across multiple geographical locations (data centers), with enough intelligence to recover from any type of issue with minimal human intervention required (if any at all).

-- Harold Shapiro, senior VP and CIO of Indieroad.net
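
To make Shapiro's wishlist a bit more concrete, here is a minimal sketch, in Python, of how a policy-driven, self-healing storage grid might expose provisioning, tiering, and replication behind one interface. Every class, policy field, and site name here is hypothetical -- it illustrates the concept, not any actual product's API.

# Hypothetical sketch of a unified, policy-driven storage grid.
# Class names, policies, and site identifiers are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Volume:
    name: str
    size_gb: int
    tier: str = "nearline"                        # ILM / tiered-storage placement
    replicas: list = field(default_factory=list)  # DR copies by data center

class StorageGrid:
    """One logical pool spanning sites; provisioning, tiering, and replication
    are policy decisions rather than per-array chores."""

    def __init__(self, sites):
        self.sites = sites
        self.volumes = {}

    def provision(self, name, size_gb, policy):
        # Dynamic, on-demand (thin) provisioning: capacity is promised, not pre-allocated.
        vol = Volume(name, size_gb, tier=policy["initial_tier"])
        vol.replicas.extend(policy.get("replicate_to", []))
        self.volumes[name] = vol
        return vol

    def heal(self, failed_site):
        # Self-healing: re-home replicas that lived in the failed data center.
        for vol in self.volumes.values():
            if failed_site in vol.replicas:
                vol.replicas.remove(failed_site)
                vol.replicas.append(next(s for s in self.sites if s != failed_site))

grid = StorageGrid(sites=["nyc-dc", "chi-dc", "lon-dc"])
grid.provision("erp-db", 500, {"initial_tier": "ssd", "replicate_to": ["lon-dc"]})
grid.heal("lon-dc")  # recovers without human intervention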

Hardware and software interoperability: Meaning any storage management software product can run on any server/OS and manage any vendor's product or device. Disk formats are still unique to each OS! As of today, proprietary systems are still winning the war -- certain software only works with specific hardware. IT, and specifically the storage industry (now along with SNIA), has been chasing true plug-and-play interoperability for nearly 20 years, and it's still not there. The goal is for a single backup/recovery product to support the storage hardware on Unix, Linux, and Windows, without a specific backup/recovery product for each OS with different GUIs, screens, etc.

-- Fred Moore, president, Horison Inc.
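
The cross-platform abstraction Moore describes would look, in spirit, something like the sketch below: one backup/recovery contract with per-OS adapters hidden behind it. The class names and snapshot mechanics are hypothetical and do not refer to any real backup product.

# Hypothetical sketch: one backup/recovery interface, per-OS adapters behind it.
import platform
from abc import ABC, abstractmethod

class BackupTarget(ABC):
    """Single contract every platform adapter must satisfy."""
    @abstractmethod
    def snapshot(self, path: str) -> str: ...
    @abstractmethod
    def restore(self, snapshot_id: str, dest: str) -> None: ...

class LinuxTarget(BackupTarget):
    def snapshot(self, path):            # e.g., could wrap LVM or filesystem snapshots
        return f"linux-snap:{path}"
    def restore(self, snapshot_id, dest):
        print(f"restoring {snapshot_id} to {dest}")

class WindowsTarget(BackupTarget):
    def snapshot(self, path):            # e.g., could wrap VSS on Windows
        return f"vss-snap:{path}"
    def restore(self, snapshot_id, dest):
        print(f"restoring {snapshot_id} to {dest}")

def get_target() -> BackupTarget:
    # One tool, one GUI/CLI; only this dispatch knows which OS sits underneath.
    return WindowsTarget() if platform.system() == "Windows" else LinuxTarget()

snap = get_target().snapshot("/var/lib/db")
get_target().restore(snap, "/restore/db")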

I/O aggregation and virtualization: Storage is very costly because the protocols are all different for data, storage, and inter-server communication, requiring different adapters, cables, and technologies. If there were one device at the top of the rack or in the blade chassis that aggregated all the I/O and directed it to the target devices, it would reduce overall cost significantly.

-- Kaushik Roy, research analyst, data center technologies, Pacific Growth Equities LLC
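
Roy's argument is ultimately adapter-and-cable math. A back-of-the-envelope comparison -- with invented per-port prices and counts, since actual costs vary widely -- shows why collapsing storage, network, and inter-server traffic onto one converged adapter per server is attractive:

# Back-of-the-envelope cost comparison; all prices and counts are hypothetical.
servers_per_rack = 32

# Separate fabrics: FC HBA, Ethernet NIC, and cluster interconnect in every server.
separate = {"fc_hba": 800, "eth_nic": 150, "cluster_hca": 600}
separate_cables = servers_per_rack * 3

# Converged: one adapter per server plus one aggregating device at the top of the rack.
converged_adapter = 700
top_of_rack_aggregator = 12000
converged_cables = servers_per_rack * 1

cost_separate = servers_per_rack * sum(separate.values())
cost_converged = servers_per_rack * converged_adapter + top_of_rack_aggregator

print(f"separate fabrics: ${cost_separate:,} and {separate_cables} cables per rack")
print(f"converged I/O:    ${cost_converged:,} and {converged_cables} cables per rack")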

Improved price/performance: Make storage drives faster and cheaper, perhaps using single-level cell (SLC) flash solid-state disks (SSDs) or some other new technology. Make storage systems faster in general -- storage performance has not kept up with capacity increases.

-- Marc Staimer, president, Dragonslayer Consulting
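
Staimer's capacity-versus-performance gap is easy to put numbers on. The quick calculation below uses rough, representative figures (not vendor specs) to show how IOPS per gigabyte collapses as disks grow, and why flash SSDs look appealing:

# Illustrative numbers only: rough IOPS and capacities, not vendor specifications.
drives = {
    "9GB 10K disk (circa 1998)":   {"gb": 9,    "iops": 100},
    "146GB 15K disk (circa 2004)": {"gb": 146,  "iops": 180},
    "1TB 7.2K disk (circa 2008)":  {"gb": 1000, "iops": 80},
    "flash SSD (circa 2008)":      {"gb": 64,   "iops": 10000},
}

for name, d in drives.items():
    # Access density: the metric that keeps falling as spindles get bigger.
    print(f"{name:30s} {d['iops'] / d['gb']:8.2f} IOPS per GB")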

Mainframe-like management features: The storage and data management capabilities of Unix, Linux, and Windows may be 15 to 20 years behind those of the mainframe. Specifically, the mainframe has the most powerful policy-based storage management engine ever developed, DFSMS. This software (though not perfect) implements data classification and allows user policies to initiate proactive tasks such as optimized data placement, automatic backup/recovery, and HSM (Hierarchical Storage Management), which is key to effectively implementing tiered storage. Nonmainframe systems software suppliers are trying hard to offer mainframe-like functionality. For example, nonmainframe systems are just now implementing thin provisioning; it first appeared on a mainframe in 1965 with OS/360. HSM first appeared in 1975, and there is still no effective cross-platform HSM for Unix, Linux, and Windows.

-- Fred Moore
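
The DFSMS-style behavior Moore describes -- classify data when it's created and let policy drive placement and migration -- can be sketched in a few lines. The storage classes, naming conventions, and thresholds below are invented for illustration; this shows the concept, not how DFSMS itself is built:

# Conceptual sketch of policy-based placement and HSM-style migration.
import time

POLICIES = {
    "database":  {"initial_tier": "fast-disk", "migrate_after_days": None},
    "home-dirs": {"initial_tier": "fast-disk", "migrate_after_days": 30},
    "archives":  {"initial_tier": "capacity",  "migrate_after_days": 7},
}
MIGRATION_ORDER = ["fast-disk", "capacity", "tape"]

def classify(dataset_name):
    # Data classification: derive a storage class from naming conventions.
    if dataset_name.startswith("db."):
        return "database"
    return "archives" if dataset_name.endswith(".bak") else "home-dirs"

def place(dataset_name):
    return POLICIES[classify(dataset_name)]["initial_tier"]

def hsm_sweep(datasets, now=None):
    """Demote datasets down the tier list once their policy's idle threshold passes."""
    now = now or time.time()
    for ds in datasets:
        limit = POLICIES[classify(ds["name"])]["migrate_after_days"]
        days_idle = (now - ds["last_access"]) / 86400
        if limit is not None and days_idle > limit:
            idx = MIGRATION_ORDER.index(ds["tier"])
            if idx + 1 < len(MIGRATION_ORDER):
                ds["tier"] = MIGRATION_ORDER[idx + 1]
    return datasets

data = [{"name": "users.smith", "tier": place("users.smith"),
         "last_access": time.time() - 45 * 86400}]
print(hsm_sweep(data))  # the idle home directory drops to the capacity tier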

Fibre Channel over Ethernet (FCoE): What if a new type of enhanced Ethernet (aka Data Center Ethernet, or DCE) could be designed that uses Ethernet extensions to achieve the reliability and efficiency of Fibre Channel? If that can be done, it would be possible to encapsulate Fibre Channel data within the Ethernet frame and thus meld storage, messaging, VoIP, video, and other data onto the same "unified" physical network.

-- Kaushik Roy
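
The encapsulation Roy describes is literally a Fibre Channel frame riding as the payload of an Ethernet frame (FCoE was assigned EtherType 0x8906). The sketch below builds such a frame in simplified form; the FCoE header fields are abbreviated and the FC payload is a placeholder, so treat it as an illustration rather than a spec-complete encoder:

# Simplified FCoE-style encapsulation: an FC frame carried as an Ethernet payload.
import struct

FCOE_ETHERTYPE = 0x8906  # EtherType assigned to FCoE

def ethernet_frame(dst_mac: bytes, src_mac: bytes, ethertype: int, payload: bytes) -> bytes:
    header = dst_mac + src_mac + struct.pack("!H", ethertype)
    return header + payload.ljust(46, b"\x00")   # pad to Ethernet's minimum payload size

def encapsulate_fc(fc_frame: bytes) -> bytes:
    # Real FCoE adds version bits, SOF/EOF delimiters, and reserved fields;
    # token header and trailer bytes stand in for them here.
    return b"\x00" * 14 + fc_frame + b"\x00" * 4

fc_frame = b"FC-2 frame bytes would go here"       # placeholder payload
frame = ethernet_frame(
    dst_mac=bytes.fromhex("0efc00000001"),         # illustrative MAC addresses
    src_mac=bytes.fromhex("0efc00000002"),
    ethertype=FCOE_ETHERTYPE,
    payload=encapsulate_fc(fc_frame),
)
print(len(frame), "bytes on the unified wire")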
