
6/15/2010
10:00 AM
George Crump

Revisiting The Keep It All Forever Retention Strategy

Each day, it seems, a new regulation is placed on businesses, and almost every one of these regulations adds to the data management burden in the data center. In the past I have advised against the keep-it-all-forever mentality of data retention, but now it may just be the only way left to protect the business.

The first justification for a keep-it-all-forever mentality is that the technology to archive years and even decades worth of data is more viable than ever. If you are going to keep everything, accessibility to that archive is critical. Disk-based archive technology meets that demand and has improved in its ability to scale to multiple petabytes of capacity, as well as in its ability to optimize storage capacity through compression and deduplication. Tape as an archive does not want to be left out either. As we discuss in our article "What is LTFS?", LTO-5 brings with it a new ease-of-use factor for tape-based data. LTFS potentially makes reading data from tape as simple as reading data from a USB thumb drive.
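To make that concrete, here is a minimal sketch of what "as simple as a thumb drive" means in practice: once an LTFS volume is mounted, archived files can be listed and read with ordinary filesystem calls, no proprietary backup application required. The mount point and file path below are assumptions for illustration, not part of any particular product.

```python
import os

# Assumed mount point: an LTFS-mounted LTO-5 cartridge appears to the OS
# as an ordinary directory tree.
LTFS_MOUNT = "/mnt/ltfs"

# Walk the mounted tape and inventory its contents with standard calls.
for root, _dirs, files in os.walk(LTFS_MOUNT):
    for name in files:
        path = os.path.join(root, name)
        print(f"{path}: {os.path.getsize(path)} bytes")

# Reading an individual archived file (hypothetical path) is identical to
# reading from local disk; sequential reads are the tape-friendly pattern.
with open(os.path.join(LTFS_MOUNT, "projects/q1-report.pdf"), "rb") as f:
    header = f.read(1024)
```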

The cloud as an archive cannot be left out of the discussion either. It provides the most incrementally scalable option, and for businesses that don't have petabytes of storage to archive it may be the most cost-effective option as well. Depending on the cloud storage provider, it may also deliver better long-term operational efficiency, relieving the data center of the data management burden in addition to saving upfront costs.
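That incremental scalability is easy to see in code. The sketch below assumes an S3-compatible object store, the boto3 SDK, and placeholder bucket, path, and storage-class values; it archives one file at a time, so capacity and cost grow only as objects are added.

```python
import boto3

# Assumed bucket name; credentials come from the environment or an IAM role.
# Any S3-compatible provider's SDK offers an equivalent call.
ARCHIVE_BUCKET = "example-corp-archive"
s3 = boto3.client("s3")

def archive_to_cloud(local_path: str, storage_class: str = "GLACIER") -> None:
    """Upload one file to the archive bucket; capacity grows object by object."""
    key = "retained/" + local_path.lstrip("/")
    s3.upload_file(
        local_path,
        ARCHIVE_BUCKET,
        key,
        ExtraArgs={"StorageClass": storage_class},  # colder tier for keep-forever data
    )

# Hypothetical local file used only to illustrate the call.
archive_to_cloud("/data/finance/2009/ledger.csv")
```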

The second justification for a keep-it-all-forever mentality is the improved ability to move older data to secondary storage. Data archiving software has certainly improved, but most interesting are file virtualization products. As we discuss in our article "What is File Virtualization?", the ability to seamlessly move data to and from an archive-optimized disk repository without constant IT intervention is critical. In a keep-everything strategy, policies need to be set broadly and then executed automatically; there may no longer be the time or resources available to manage down at the file level. As the number of regulations increases and focused enforcement of those regulations continues, the number of recovery/discovery requests is going to skyrocket. IT cannot afford to, nor does it really want to, be involved in delivering this data. The transparent recovery that file virtualization provides is key.
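For illustration, a broad, automated policy can be as simple as the sketch below, which sweeps files untouched for roughly 18 months from primary storage to an archive tier. The paths and age threshold are assumptions, and a real file virtualization product would leave a transparent pointer in the original namespace rather than physically relocating the file the way this script does.

```python
import os
import shutil
import time

# Broad, automated policy (assumed values): anything not modified in the
# last ~18 months moves from the primary tier to the archive tier.
PRIMARY_TIER = "/primary/shares"
ARCHIVE_TIER = "/archive/shares"
AGE_THRESHOLD_DAYS = 548

def migrate_cold_files() -> None:
    cutoff = time.time() - AGE_THRESHOLD_DAYS * 86400
    for root, _dirs, files in os.walk(PRIMARY_TIER):
        for name in files:
            src = os.path.join(root, name)
            if os.path.getmtime(src) < cutoff:
                dst = os.path.join(ARCHIVE_TIER, os.path.relpath(src, PRIMARY_TIER))
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                # A file-virtualization layer would leave a transparent stub
                # here; this sketch simply relocates the file.
                shutil.move(src, dst)

if __name__ == "__main__":
    migrate_cold_files()
```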

In our next entry we will discuss what else has to be in place for a keep-it-all-forever data retention strategy to be successful. We will then conclude this series with a cost justification of a keep-everything strategy vs. a strict data-elimination strategy.

Track us on Twitter: http://twitter.com/storageswiss

Subscribe to our RSS feed.

George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Find Storage Switzerland's disclosure statement here.
