Commentary
George Crump
5/6/2013 10:28 AM

Active Data Vs. Active Archive

We need better metrics to help us decide what data should be on primary storage and what should be on archive storage.

In my last column I discussed how our definition of active data is changing. We now have to look at the potential working set instead of the actual working set. Thanks to initiatives like real-time analytics, some data that we used to classify as archivable now needs to be at the ready. If that is the case, what is the role of the archive? How do disk and tape archives participate in an increasingly active world?

The key to a balanced storage strategy, even with all this active data, is to change how we decide which data sets to archive. Under the current archive methodology, the most common decision point is last modification date: data that is X days or years old can be archived, and everything else has to stay on primary storage. The problem with this methodology is that it is not compatible with real-time analytics, and it isn't really compatible with the way users actually use data.
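To make the current rule concrete, here is a minimal sketch of an age-only archive check in Python. The 180-day cutoff is a hypothetical value, not a recommendation; real archiving products apply the same idea through their own policy engines.

    import os
    import time

    ARCHIVE_AGE_DAYS = 180  # hypothetical cutoff; actual policies vary widely

    def is_archivable_by_age(path, now=None):
        """The classic rule: archive anything not modified in the last X days."""
        now = now or time.time()
        age_days = (now - os.path.getmtime(path)) / 86400
        return age_days > ARCHIVE_AGE_DAYS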

We need better metrics to help us decide what data should be on primary storage and what should be on archive storage. A key criterion will be whether data, if it needs to be accessed, must be delivered instantly -- in other words, whether it may need to be analyzed in the future. That data should probably not go to an archive no matter how old it gets, since it carries a statistical probability of value.
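One way to picture a better decision point is a policy that weighs expected access latency and analytics potential alongside age. The sketch below is illustrative only; the analytics_candidate and required_latency_ms fields are hypothetical metadata that an organization would have to supply from its own classification effort.

    from dataclasses import dataclass

    @dataclass
    class DataSetProfile:
        # Hypothetical metadata describing what is known about a data set.
        age_days: int
        analytics_candidate: bool   # could it feed real-time analytics later?
        required_latency_ms: int    # how quickly must a future read be served?

    def placement(profile):
        """Decide placement on access need, not just age (illustrative only)."""
        if profile.analytics_candidate or profile.required_latency_ms < 100:
            return "primary"        # must stay instantly deliverable
        if profile.age_days > 30:
            return "archive"        # known non-analytic data can move early
        return "primary"

    # A two-year-old data set that may still be analyzed stays on primary storage.
    print(placement(DataSetProfile(age_days=730, analytics_candidate=True,
                                   required_latency_ms=500)))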

However, if we know for sure that a certain data set will not be part of a real-time processing application or be needed for analytics, then let's archive it as soon as possible and not even wait for it to age. Some of this data could even spend its entire lifecycle on archive storage, because the performance of the archive is "good enough" for the use case.

There is also a need to understand the relationships between files. As a simple example, I am writing a couple of books right now. Each of those books has multiple iterations saved under different file names, and large chunks of the content within those files are identical. When I get to the end of any of these books, I don't think I will need all of these drafts. But because all data has become a "you never know" situation, I will want to keep them around, even though I doubt I will ever access them again.

The question is how many of these drafts will I need instant access to, and for how many could I wait 10 minutes before viewing them? For my purposes, all I will really need is the final copy and maybe a couple of the iterations. It would be nice to have software analyze this data, keep the versions of the files with the most significant internal changes, and archive the rest.
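I'm not aware of software that does exactly this for a pile of book drafts, but the analysis is easy to sketch. The example below compares consecutive drafts with Python's difflib and keeps only the versions whose content changed substantially; the 0.9 similarity threshold is an arbitrary assumption.

    import difflib

    def significant_versions(drafts, similarity_threshold=0.9):
        """Keep the first draft plus any draft that differs substantially from
        the last kept one; the rest become archive candidates.
        drafts is an ordered list of (name, text) pairs."""
        keep, archive = [], []
        last_kept_text = None
        for name, text in drafts:
            if last_kept_text is None:
                keep.append(name)
                last_kept_text = text
                continue
            similarity = difflib.SequenceMatcher(None, last_kept_text, text).ratio()
            if similarity < similarity_threshold:   # big internal change: keep it
                keep.append(name)
                last_kept_text = text
            else:                                   # near-duplicate: archive it
                archive.append(name)
        return keep, archive

    drafts = [("draft1.txt", "chapter one " * 100),
              ("draft2.txt", "chapter one " * 100 + "a minor tweak"),
              ("draft3.txt", "chapter one, rewritten entirely " * 100)]
    print(significant_versions(drafts))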

Interestingly, one of the things we are learning from our primary storage deduplication test is how big a role this technology can play in these circumstances. Essentially, I can keep all of the files with minimal impact on space utilization. And since they can be disk based, retrieval time is excellent.
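To show why deduplication helps so much here, a minimal sketch of block-level deduplication follows: identical chunks across all the drafts are stored once and referenced by hash. It assumes simple fixed-size chunks; production systems generally use variable-size, content-defined chunking, but the space-savings effect on near-identical files is the same.

    import hashlib

    CHUNK_SIZE = 4096  # fixed-size chunks for simplicity; real systems differ

    def dedupe(files):
        """Store each unique chunk once; return the chunk store and per-file recipes.
        files maps file name -> bytes."""
        store = {}                      # chunk hash -> chunk bytes
        recipes = {}                    # file name -> list of chunk hashes
        for name, data in files.items():
            hashes = []
            for i in range(0, len(data), CHUNK_SIZE):
                chunk = data[i:i + CHUNK_SIZE]
                digest = hashlib.sha256(chunk).hexdigest()
                store.setdefault(digest, chunk)
                hashes.append(digest)
            recipes[name] = hashes
        return store, recipes

    files = {"draft1.txt": b"chapter one " * 2000,
             "draft2.txt": b"chapter one " * 2000 + b"one new paragraph"}
    store, recipes = dedupe(files)
    raw = sum(len(d) for d in files.values())
    stored = sum(len(c) for c in store.values())
    print(f"raw: {raw} bytes, deduplicated: {stored} bytes")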

Another classification point is how the data is acted on when recovered: from beginning to end, or at some random point in the file? In other words, can the data be consumed sequentially? If so, then just the front section of that data needs to be stored on primary storage -- enough that it can start being accessed while the back end catches up and users see no delay in response time. This capability will require a file system intelligent enough to deliver data from two different sources at the same time.
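A file system with that kind of split-placement intelligence is more than a short example can show, but the read path it implies can be sketched: serve the primary-resident front section immediately while the remainder streams in from the archive tier in the background. Everything below, including the chunk size, is a hypothetical illustration.

    import threading
    import queue

    def tiered_read(head_bytes, fetch_tail, chunk_size=65536):
        """Yield the file's data immediately from the primary-resident head,
        then continue with chunks fetched from the archive tier in the background.
        head_bytes is the front section kept on primary storage;
        fetch_tail is a callable returning the rest of the file as bytes."""
        tail_chunks = queue.Queue()

        def fetch():
            tail = fetch_tail()                   # e.g. recall from disk/tape archive
            for i in range(0, len(tail), chunk_size):
                tail_chunks.put(tail[i:i + chunk_size])
            tail_chunks.put(None)                 # sentinel: tail fully delivered

        threading.Thread(target=fetch, daemon=True).start()

        # The reader starts consuming the head with no delay...
        for i in range(0, len(head_bytes), chunk_size):
            yield head_bytes[i:i + chunk_size]

        # ...and by the time the head is exhausted, the tail should be arriving.
        while True:
            chunk = tail_chunks.get()
            if chunk is None:
                break
            yield chunk

    # Usage sketch: head lives on primary storage, tail is recalled from archive.
    data = b"".join(tiered_read(b"A" * 200000, lambda: b"B" * 800000))
    print(len(data))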

When these attributes of the data are known and understood, it can be placed on the proper type of storage in the data center. Data whose recovery need is random and unpredictable will need to go on fast storage if analytics are being used. Data that is very similar to other data can be archived or deduplicated.

This archive, depending on the known recovery need, can easily be tape based, because for a large chunk of the data set how quickly it is recovered is less important than how cost effectively it can be stored.

Comments
dave@qstar
User Rank: Apprentice
5/10/2013 | 1:40:58 PM
re: Active Data Vs. Active Archive
Hi George - very interesting article (as always). I am a board member of the Active Archive Alliance and SVP of Sales for QStar. My main comments concern this paragraph:

"We need better metrics to help us decide what
data should be on primary storage and what should be on archive storage.
A key criteria is going to be what data, if it needs to be accessed,
will need to be delivered instantly -- in other words, something that
may need to be analyzed in the future. This data should probably not go
to an archive no matter how old it gets since it could have a
statistical probability of value".

This assumes that archive storage is slow and primary storage is fast, which is not necessarily correct. Active Archive solutions can use tape, but they can also use disk or object storage, which is not slow. Accessibility and instant delivery can be provided by object storage solutions acting as an active archive. The key point is getting data away from the primary storage environment, and the constant backup regime associated with it, once the data is no longer changing. Archives secure data through copy or replication at the time of ingestion, removing the ongoing need for backup.

Creating hybrid archives (using disk-based and tape-based technology) is the answer to your question, and using "versioning" to store multiple iterations of a file over time is possible and included in many archive solutions. As you point out, versions can be stored without significantly consuming capacity. If you know your data, you can move it (perhaps automatically) to the correct archive technology for long-term preservation.

I agree that metrics can always be improved. Currently, file metadata is about the only way to decide which files should be moved to a fast archive and which to a slower one, but for many organizations that is enough.