
News

12/7/2009 03:46 PM
George Crump
Commentary

Failure To Move

Don MacVittie, in his blog over at F5, recently commented on an article we wrote, "What is File Virtualization?", pointing out that we missed a key issue: what to do when your virtualization box goes down. While my defense could be that the subject is beyond the scope of a primer, it is not beyond the scope of this blog. If you are considering a tiered storage model, what do you do when your data mover fails?

A key consideration when moving data between tiers of storage is what to do when the box responsible for that movement goes down. As I have written in many blog entries, there are plenty of ways to move data between tiers of storage, but the most common seem to be manual copying, automated data movement software, and a file virtualization or global file system. How does each of these let you get to your data if it has failed?

The manual method requires no change. You were manually copying data and, I would assume, telling users something like "if it's not here, check there." Other than the storage system itself, there is really nothing to break. The problem with the manual method is, of course, that it is manual, and most IT professionals have plenty to do during the day; adding another manual task to that list is not going to be popular. The manual method may not scale well either, since that archive location has to remain basically the same.
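For illustration, here is a minimal sketch of what the manual approach often amounts to in practice: a script an administrator runs by hand to move aging files from a primary share to an archive share, preserving the relative paths. The mount points and the 90-day age threshold are hypothetical, not taken from any particular product.

# Minimal sketch of a hand-run tiering pass. Paths and threshold are hypothetical.
import os
import shutil
import time

PRIMARY = "/mnt/primary_nas"    # hypothetical primary tier mount
ARCHIVE = "/mnt/archive_nas"    # hypothetical archive tier mount
AGE_LIMIT = 90 * 24 * 3600      # move files untouched for 90 days

def migrate_old_files():
    cutoff = time.time() - AGE_LIMIT
    for root, _dirs, files in os.walk(PRIMARY):
        for name in files:
            src = os.path.join(root, name)
            if os.path.getmtime(src) < cutoff:
                rel = os.path.relpath(src, PRIMARY)
                dst = os.path.join(ARCHIVE, rel)
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.move(src, dst)   # users must now know to "check there"

if __name__ == "__main__":
    migrate_old_files()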

The automated migration software approach typically moves data for you based on file policies. To help users find their way back to their files, the software typically leaves behind a stub file that points to the file's new location. If the application crashes or those stub files get corrupted, how do you get to your files? That depends in part on the application. If it stores migrated files as blobs in a database, getting to that data could be quite challenging. If the data is migrated to tape, you will probably need the application back up and running before you can get to your data. If the stub left behind leverages shortcuts or symbolic links, those should still work even if the software has failed, but these approaches tend to get messy, and you still have the issue of millions of small (now smaller) files on your primary storage.
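As a rough sketch of the stub-file idea, assuming an ordinary POSIX file system rather than any particular migration product: move the file to the archive tier and leave a symbolic link in its place. Because the link is a plain file system object, it keeps resolving even when the migration software itself is down. The paths shown are hypothetical.

# Sketch of "move and leave a stub" using a symbolic link. Paths are hypothetical.
import os
import shutil

def migrate_with_stub(src_path, primary_root, archive_root):
    rel = os.path.relpath(src_path, primary_root)
    dst = os.path.join(archive_root, rel)
    os.makedirs(os.path.dirname(dst), exist_ok=True)
    shutil.move(src_path, dst)
    os.symlink(dst, src_path)   # the stub: same name, now points to the new location

# Example (hypothetical paths):
# migrate_with_stub("/mnt/primary_nas/projects/report.doc",
#                   "/mnt/primary_nas", "/mnt/archive_nas")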

Even if the automated software approach moves data to another disk and keeps it in native file format, it is often stored in a nonsensical manner. In theory you could manually browse to the destination file system and find your data, but that file system sometimes looks nothing like the one you migrated from; often it's just a bunch of date-stamped directories with files dumped inside them. Essentially, the application assumes that it will always be available to recover the data. That may or may not be a good assumption.

File virtualization differs in that the metadata, the information about where each file actually resides, is stored within the appliance. These appliances are typically highly available and can be implemented in redundant pairs. The file systems they virtualize remain untouched and can be accessed manually if the file virtualization engine fails for some reason. You do need to know where the file virtualization system is placing data, so having a copy of your configuration comes in handy, but you can structure the target devices to mirror the logical directory structure of the source devices.
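Conceptually, the appliance's metadata is just a map from the logical, client-visible path to the physical location on whichever tier holds the file. The sketch below, with hypothetical mount points and a plain dictionary standing in for the appliance's metadata store, shows why mirroring the source directory structure on the targets matters: if the engine is down, the same file can still be found by walking the tiers by hand.

# Sketch of file virtualization metadata and the manual fallback. All names are hypothetical.
import os

LOGICAL_TO_PHYSICAL = {
    "/corp/projects/report.doc": "/mnt/tier2_nas/corp/projects/report.doc",
    "/corp/projects/budget.xls": "/mnt/tier1_nas/corp/projects/budget.xls",
}

TIERS = ["/mnt/tier1_nas", "/mnt/tier2_nas"]  # each tier mirrors the logical layout

def resolve(logical_path, engine_up=True):
    if engine_up:
        return LOGICAL_TO_PHYSICAL[logical_path]
    # Fallback when the appliance is down: check each tier manually, relying on
    # the target devices mirroring the source directory structure.
    for tier in TIERS:
        candidate = tier + logical_path
        if os.path.exists(candidate):
            return candidate
    raise FileNotFoundError(logical_path)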

Finally, some file virtualization systems can help get around storage system failure as well. They can replicate moved data to a second NAS system and, in the event of a failure on the primary archive, reroute users to the remaining system. While file virtualization may not be the be-all and end-all, it certainly may play a role in making true tiered storage a reality.
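As a final hypothetical sketch, failover at read time can be as simple as trying the primary archive first and falling back to the replica. The mount points below are invented for illustration and do not describe any particular vendor's implementation.

# Sketch of read-time failover between a primary archive and its replica. Paths are hypothetical.
import os

PRIMARY_ARCHIVE = "/mnt/archive_primary"
REPLICA_ARCHIVE = "/mnt/archive_replica"

def open_archived(rel_path):
    for root in (PRIMARY_ARCHIVE, REPLICA_ARCHIVE):
        candidate = os.path.join(root, rel_path)
        try:
            return open(candidate, "rb")   # first reachable copy wins
        except OSError:
            continue                        # primary down or file missing: try the replica
    raise FileNotFoundError(rel_path)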

Track us on Twitter: http://twitter.com/storageswiss

Subscribe to our RSS feed.

George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Find Storage Switzerland's disclosure statement here.

 
