
Commentary
2/23/2008 12:42 AM
Terry Sweeney

Up To The Challenge?

Mask complexity, improve performance, and automate every last function possible -- those, in a giant nutshell, are the biggest engineering challenges for storage in the next several years, according to some big thinkers who've deployed a SAN or two in their time.

A few days ago, I mentioned the list drawn up by the National Academy of Engineering of the largest technical challenges of the next century. It got me wondering how storage experts whose opinions I trust might answer the same question for this sector of IT. I also reached out to a few vendors to see where they are with their thinking; I'll post their responses next week.

In the meantime, here's what a power user, two long-time storage consultants, and a financial analyst identified as storage's biggest challenges. Everyone -- vendor and nonvendor alike -- cited the challenge of "greening up" storage: reducing its carbon footprint and power requirements.

Unified storage: A truly open, usable, standardized storage 'grid' capability, out of the box, for enterprise environments is the biggest desired technology breakthrough in the storage realm. It would be a marriage of virtual file systems, storage virtualization technologies, tiered storage mechanisms for attaining information life cycle management (ILM), dynamic on-demand provisioning, and data de-duplication, with built-in replication and DR (redundancy) capabilities as well as the essential data caching and RAID levels to meet performance requirements. It would be a self-healing, re-routing infrastructure, distributed across multiple geographic locations (data centers), with enough intelligence to recover from any type of issue with minimal human intervention (if any).

-- Harold Shapiro, senior VP and CIO of Indieroad.net
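To make Shapiro's wish list a little more concrete, here's a toy Python sketch of just one slice of it -- a virtual volume that replicates writes across nodes and re-routes reads around a failed node without human intervention. Every class and name here is invented for illustration; a real storage grid would do this in the fabric, not in application code.

```python
# Toy sketch: synchronous replication plus self-healing reads.
# All names are hypothetical; this only illustrates the behavior.

class StorageNode:
    def __init__(self, name):
        self.name = name
        self.healthy = True      # flipped to False when the node fails
        self.blocks = {}         # key -> data

    def write(self, key, data):
        self.blocks[key] = data

    def read(self, key):
        if not self.healthy:
            raise IOError("node %s is down" % self.name)
        return self.blocks[key]

class VirtualVolume:
    """Presents one logical volume backed by several replicas."""
    def __init__(self, replicas):
        self.replicas = replicas

    def write(self, key, data):
        for node in self.replicas:       # replicate to every site
            node.write(key, data)

    def read(self, key):
        for node in self.replicas:       # re-route around failures
            try:
                return node.read(key)
            except IOError:
                continue                 # try the next replica
        raise IOError("all replicas unavailable")
```

Kill one node and reads keep working off the surviving replicas -- that's the "minimal human intervention" property in miniature.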

Hardware and software interoperability: Meaning that any storage management software product can run on any server/OS and manage any vendor's product/device. Disk formats are still unique to each OS! As of today, proprietary systems are still winning the war -- certain software only works with specific hardware. IT, and specifically the storage industry along with SNIA, has been chasing true plug-and-play interoperability for nearly 20 years, and it's still not there. The goal is for a single backup/recovery product to support the storage hardware on Unix, Linux, and Windows, without a separate backup/recovery product for each OS with different GUIs, screens, etc.

-- Fred Moore, president, Horison Inc.
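Moore's goal -- one backup/recovery product spanning Unix, Linux, and Windows -- is essentially a plea for a common abstraction layer. A minimal sketch of that idea in Python, with hypothetical adapter names (real products would need far more than two methods):

```python
# Sketch: one backup product, per-OS adapters behind a common interface.
# Class and method names are invented for illustration.

from abc import ABC, abstractmethod

class PlatformAdapter(ABC):
    """Hides OS-specific disk formats and snapshot APIs."""
    @abstractmethod
    def snapshot(self, volume):
        ...

    @abstractmethod
    def restore(self, snapshot_id, volume):
        ...

class LinuxAdapter(PlatformAdapter):
    def snapshot(self, volume):
        return "lvm-snap:" + volume          # e.g., an LVM snapshot

    def restore(self, snapshot_id, volume):
        print("restoring %s onto %s" % (snapshot_id, volume))

class BackupProduct:
    """One GUI, one engine; the adapter supplies the OS specifics."""
    def __init__(self, adapter):
        self.adapter = adapter

    def backup(self, volume):
        return self.adapter.snapshot(volume)
```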

I/O aggregation and virtualization: Storage is very costly because the protocols for data, storage, and inter-server communication are all different, requiring different adapters, cables, and technologies. If there were one device at the top of the rack, or in the blade chassis, that aggregated all the I/O and directed it to the target devices, it would reduce overall cost significantly.

-- Kaushik Roy, research analyst, data center technologies, Pacific Growth Equities LLC
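Roy's aggregator is, at heart, a multiplexer: one uplink in, several fabrics out, with routing decided by traffic class. Here's a hypothetical Python sketch of that dispatch logic (the real win -- eliminating redundant adapters, cables, and technologies -- happens in hardware, not code):

```python
# Sketch: a top-of-rack I/O aggregator routing frames by traffic class.
# Names and classes are invented for illustration.

class Frame:
    def __init__(self, traffic_class, target, payload):
        self.traffic_class = traffic_class   # "lan", "storage", or "ipc"
        self.target = target                 # downstream device/port
        self.payload = payload

class IOAggregator:
    """Multiplexes LAN, storage, and inter-server traffic on one uplink."""
    def __init__(self):
        self._fabrics = {}                   # traffic class -> send function

    def register_fabric(self, traffic_class, send):
        self._fabrics[traffic_class] = send

    def forward(self, frame):
        send = self._fabrics.get(frame.traffic_class)
        if send is None:
            raise ValueError("no fabric for class %r" % frame.traffic_class)
        send(frame)
```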

Improved price/performance: Make storage drives faster and cheaper, perhaps using single-level cell (SLC) flash solid-state disks (SSDs) or some other new technology. Make storage systems faster in general -- storage performance has not kept up with capacity increases.

-- Marc Staimer, president, Dragonslayer Consulting

Mainframe-like management features: The storage and data management capabilities of Unix, Linux, and Windows may be 15 to 20 years behind those of the mainframe. Specifically, the mainframe has the most powerful policy-based storage management engine ever developed, called DFSMS. This software (though not perfect) implements data classification and allows user policies to initiate proactive tasks such as optimized data placement, automatic backup/recovery, and HSM (Hierarchical Storage Management), which is key to effectively implementing tiered storage. Nonmainframe systems software suppliers are trying hard to offer mainframe-like functionality. For example, nonmainframe systems are just now implementing thin provisioning, which first appeared on the mainframe in 1965 with OS/360. HSM first appeared in 1975, and there is still no effective cross-platform HSM for Unix, Linux, and Windows.

-- Fred Moore
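For readers who've never touched DFSMS, the core idea is simple: user-defined policies classify data and drive migration between tiers automatically. A toy Python sketch of that policy loop, with invented policy values and tier names (DFSMS itself is vastly richer):

```python
# Sketch: DFSMS-style, policy-driven HSM tiering by access age.
# Policies and tier names are invented for illustration.

import time

class Dataset:
    def __init__(self, name, last_access, tier="primary"):
        self.name = name
        self.last_access = last_access       # epoch seconds
        self.tier = tier

# (max days since last access, destination tier), checked in order
POLICIES = [(30, "primary"), (180, "nearline"), (float("inf"), "tape")]

def apply_hsm_policy(dataset, now=None):
    """Migrate a dataset to the tier its access age calls for."""
    now = time.time() if now is None else now
    age_days = (now - dataset.last_access) / 86400.0
    for max_age, tier in POLICIES:
        if age_days <= max_age:
            dataset.tier = tier
            break
    return dataset
```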

Fibre Channel over Ethernet (FCoE): What if a new type of enhanced Ethernet (aka Data Center Ethernet, or DCE) could be designed that uses Ethernet extensions to achieve the reliability and efficiency of Fibre Channel? If so, it would be possible to encapsulate Fibre Channel data within the Ethernet frame and thus meld storage, messaging, VoIP, video, and other data onto the same "unified" physical network.

-- Kaushik Roy
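The encapsulation Roy describes is conceptually tidy: keep the Fibre Channel frame intact and treat it as the payload of an Ethernet frame with its own Ethertype (0x8906 is the value assigned to FCoE). A deliberately simplified Python sketch -- the actual proposal adds version/reserved fields and precise SOF/EOF encodings, and depends on a lossless DCE fabric underneath:

```python
# Simplified sketch of FCoE-style encapsulation. The SOF/EOF byte
# values below are illustrative, not the real encodings.

import struct

FCOE_ETHERTYPE = 0x8906   # Ethertype assigned to FCoE

def encapsulate_fc_frame(dst_mac, src_mac, fc_frame):
    """Wrap a raw Fibre Channel frame in an Ethernet frame (simplified)."""
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    sof = b"\x2e"         # start-of-frame delimiter (illustrative)
    eof = b"\x41"         # end-of-frame delimiter (illustrative)
    return eth_header + sof + fc_frame + eof
```

Because the FC frame rides inside Ethernet untouched, the same wire can carry storage, VoIP, video, and ordinary LAN traffic -- which is the whole point.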
