
News

5/9/2008
09:15 AM
George Crump
Commentary

Data Moveage: How To Move Data And Live To Tell About It


In a previous entry I wrote about the importance of moving data from primary storage to another platform. The roadblock is how to move that data from expensive storage to secondary storage. The traditional approach, deploying an agent on every server that monitors all the files and moves those that haven't been accessed to a lower class of storage, hasn't worked well in the enterprise. There are a variety of reasons, but most of the issues come down to deploying and managing that many agents, plus the challenge of leaving stub files (files that point to where the actual file was moved) and managing those files.

There also are software applications or appliances that don't require an agent; they walk or crawl the file systems of the servers in your environment to find files that meet certain criteria, old age being one, and then move those files to an alternative class of storage. The challenge with this approach is that the crawling/auditing process is very slow, the stub files left behind still have to be managed, and these systems need unprecedented access to the network so they can walk the mounted file systems.
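To make the mechanics concrete, here is a rough sketch of what an agent- or crawler-style mover does under the hood. The mount points, age threshold, and stub format are invented for illustration; no vendor's product works exactly this way.

```python
import shutil
import time
from pathlib import Path

# Illustrative paths and threshold -- not any vendor's implementation.
PRIMARY = Path("/mnt/primary")        # expensive tier-one storage
SECONDARY = Path("/mnt/secondary")    # cheaper secondary storage
AGE_THRESHOLD_DAYS = 180              # "hasn't been accessed" cutoff

def migrate_cold_files():
    cutoff = time.time() - AGE_THRESHOLD_DAYS * 86400
    for path in PRIMARY.rglob("*"):
        if not path.is_file() or path.name.endswith(".stub"):
            continue
        if path.stat().st_atime < cutoff:            # last access older than cutoff
            dest = SECONDARY / path.relative_to(PRIMARY)
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.move(str(path), str(dest))         # push the cold file down a tier
            # Leave a stub that records where the real file went, so a later
            # request can be redirected or the file recalled.
            path.with_name(path.name + ".stub").write_text(str(dest))

if __name__ == "__main__":
    migrate_cold_files()
```

Every one of those stubs is something the mover now has to track, which is exactly the management burden described above.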

Both solutions can be made to work, and in some cases work well, but for a simpler approach I like to recommend Automated Tiered Storage, File Virtualization, or in some cases both.

Automated Tiered Storage (ATS), as far as I know, is available from only one vendor today, Compellent, which calls it Data Progression. The technology moves data to different classes of storage at the block level. No software agents are needed and no file system scans are required; it just works. Because it is a block-level technology, it even works on data that wouldn't typically be moved to less expensive storage. Databases, for example, show up as one big active file to most data movers; to an ATS system they are blocks of data, some of them very active, others not. The blocks that are not active are moved to a lower storage class, like less expensive ATA disk.
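The idea is easy to see in sketch form. The snippet below tracks a last-access time per block and demotes idle blocks to cheaper disk; it only illustrates block-level tiering and is not how Compellent's Data Progression is actually implemented.

```python
import time
from dataclasses import dataclass, field

# Illustrative only -- tier names and threshold are made up.
@dataclass
class Block:
    lba: int                      # logical block address
    tier: str = "fc"              # "fc" = fast disk, "ata" = cheap disk
    last_access: float = field(default_factory=time.time)

def demote_cold_blocks(blocks, idle_days=30):
    """Move blocks that haven't been touched recently to cheaper disk.

    A database looks like one big active file to a file-level mover, but at
    the block level many of its blocks sit idle and can drop a tier.
    """
    cutoff = time.time() - idle_days * 86400
    for blk in blocks:
        if blk.tier == "fc" and blk.last_access < cutoff:
            blk.tier = "ata"       # relocate the block to the lower tier
    return blocks
```

The point is that the unit of decision is the block, so even a busy database file can shed its idle blocks to ATA disk.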

ATS has the benefit of...well...working. It is simple, it is automatic, and it just works. The result is that people use it and see the benefits of the technology almost instantly.

The downside to ATS is that it requires a storage system that can support it. If you aren't ready to part with your existing system, or if you need the ability to move data to a different type of storage that is designed for long-term retention, like a disk-based archive, then a File Virtualization Appliance is a good fit. Companies such as Attune and OnStor offer these types of products.

Think of file virtualization as a DNS server for files. I don't know what IP address yahoo.com is assigned to, but I know that when I type yahoo.com into my Web browser it comes up. This is how file virtualization products work: when I request george.doc, I don't need to know which file server it is on; it's just served up to me.
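A toy version of that lookup makes the analogy concrete. The namespace table and server paths below are made up; a real file virtualization appliance maintains this mapping for an entire environment and updates it as files move.

```python
# A toy "DNS for files": clients look up a logical name and get back the
# physical location, so files can move between filers without anyone
# having to update paths. Server names and shares are invented.
NAMESPACE = {
    "george.doc": "//filer2/users/george/george.doc",
    "budget.xls": "//archive-nas/finance/2008/budget.xls",
}

def resolve(logical_name: str) -> str:
    """Return the current physical location for a logical file name."""
    try:
        return NAMESPACE[logical_name]
    except KeyError:
        raise FileNotFoundError(f"{logical_name} is not in the global namespace")

def migrate(logical_name: str, new_location: str) -> None:
    """After the data is physically moved, only the namespace entry changes."""
    NAMESPACE[logical_name] = new_location

# Usage: the caller never knows (or cares) which NAS actually holds the file.
print(resolve("george.doc"))
```

When a file is migrated to another platform, only the namespace entry changes; the name the user asks for stays the same.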

File virtualization has the benefit of being able to move data between different storage platforms, for example, from a primary NAS to a data deduplication appliance. The installation is a little more intrusive because the appliance has to sit in-band between the users and the storage, but like ATS, once it's in place it just works.

Data movement, the inhibitor to leveraging lower-cost storage, is now simple to implement and manage. You can either invest in a storage platform that has it built in or implement file virtualization to move data across multiple platforms.

George Crump is founder of Storage Switzerland, an analyst firm focused on the virtualization and storage marketplaces. It provides strategic consulting and analysis to storage users, suppliers, and integrators. An industry veteran of more than 25 years, Crump has held engineering and sales positions at various IT industry manufacturers and integrators. Prior to Storage Switzerland, he was CTO at one of the nation's largest integrators.

 
