Commentary
George Crump
6/10/2011 03:46 PM

Big Data A Big Backup Challenge

Backing up Big Data requires a system that is fast, cost-effective, and reliable, qualities that conflict in the world of storage.

Big Data is, well, big, and size is not the only challenge it places on backup. It is also a backup application's worst nightmare, because many Big Data environments consist of millions or even billions of small files. How do you design a backup infrastructure that will support these realities?

First, examine what data does not have to be backed up at all because it can be easily regenerated from another system that is already being backed up. A good example is report data generated from a database.

Once this data is identified, exclude it. Next, move on to the real problem at hand--unique data that can't be re-created. This is often discrete file data that is fed into the environment by devices or sensors. It is essentially point-in-time data that can't be regenerated. This data is often copied within the Big Data environment so that it can be safely analyzed, so there can be a fair amount of redundancy in the environment. This is an ideal role for disk backup devices. They are better suited to small-file transfers and, with deduplication, can eliminate redundancy and compress much of the data to optimize backup capacity.
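To illustrate why deduplicating disk targets fit this workload, here is a minimal sketch of content-hash deduplication (hypothetical Python, not tied to any particular backup product): redundant working copies of the same sensor files collapse to a single stored instance, while the catalog keeps enough information to rebuild each file.

```python
import hashlib

# Hypothetical illustration of content-hash deduplication: each unique chunk
# of data is stored once, and redundant copies only add a reference.

CHUNK_SIZE = 4 * 1024 * 1024  # 4 MB chunks (an assumed chunk size)

store = {}    # chunk hash -> chunk bytes (the deduplicated store)
catalog = {}  # file path -> ordered list of chunk hashes

def backup_file(path):
    hashes = []
    with open(path, "rb") as f:
        while True:
            chunk = f.read(CHUNK_SIZE)
            if not chunk:
                break
            digest = hashlib.sha256(chunk).hexdigest()
            # Identical chunks -- for example, the working copies analysts
            # make of the same sensor data -- are stored only once.
            store.setdefault(digest, chunk)
            hashes.append(digest)
    catalog[path] = hashes

def restore_file(path, destination):
    with open(destination, "wb") as out:
        for digest in catalog[path]:
            out.write(store[digest])
```

The more the environment duplicates its point-in-time data for analysis, the more the backup capacity consumed approaches the size of a single copy.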

Effective optimization is critical since Big Data environments are measured in the hundreds of terabytes and will soon be measured in the dozens of petabytes. It is also important to consider just how far you want to extend disk backup's role in this environment.

Clearly deduplicated disk is needed, but it probably should be used in conjunction with tape--not as a replacement for it. Again, a large portion of this data often can't be regenerated. Loss of this data is permanent and potentially ruins the Big Data sample. You can't be too careful, and, at the same time, you have to control capacity costs so that the value of the decisions Big Data enables is not overshadowed by the expense of keeping the data that supports them. We suggest a Big Data backup strategy that includes a large tier of optimized backup disk to store the near-term data set for as long as possible, with seven to 10 years' worth of data being ideal, then using tape for the decades' worth of less frequently accessed data.
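As a rough sketch of that kind of age-based placement (hypothetical Python; the tier names and the seven-year retention window are assumptions for illustration, not a specific product's policy):

```python
from datetime import datetime, timedelta

# Hypothetical age-based placement policy: keep roughly the last 7-10 years
# on deduplicated backup disk, and push older, rarely accessed data to tape.

DISK_RETENTION = timedelta(days=365 * 7)  # assumed near-term window on optimized disk

def choose_backup_tier(last_modified, now=None):
    """Return the backup tier for a data set based on its age."""
    now = now or datetime.now()
    age = now - last_modified
    if age <= DISK_RETENTION:
        return "deduplicated-disk"  # near-term data stays on optimized disk
    return "tape"                   # decades of colder data go to tape

# Example: data last touched 12 years ago lands on tape.
old_data = datetime.now() - timedelta(days=365 * 12)
print(choose_backup_tier(old_data))  # -> "tape"
```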

Alternatively, you could go with the suggestion we made in a recent article, "Tape's Role in Big Data," and combine the two into a single active archive--essentially a single file system that seamlessly marries all of these media types. This would consist of fast but low-capacity (by Big Data standards) primary disk for data ingestion and active analytical processing, optimized disk for near-term data that is not being analyzed at the moment, and tape for long-term storage. In this environment, data can be sent to all tiers of storage as it is created or modified, so that fewer backups, or even none, need to be done.
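A minimal sketch of that active-archive idea follows (hypothetical Python; the Tier class, tier names, and write() method are illustrative assumptions, not a real file system API): a single write path places incoming data on every tier as it arrives, so a separate backup pass becomes unnecessary.

```python
# Hypothetical sketch of an active archive: one ingest path that fans data
# out to every storage tier, so each tier already holds a consistent copy.

class Tier:
    def __init__(self, name):
        self.name = name
        self.objects = {}

    def write(self, key, data):
        self.objects[key] = data

primary_disk = Tier("primary-disk")      # fast, low capacity: ingest and active analysis
optimized_disk = Tier("optimized-disk")  # deduplicated near-term copy
tape = Tier("tape-archive")              # long-term copy

def ingest(key, data):
    # Data is written to all tiers as it is created or modified,
    # which is what removes the need for a separate backup job.
    for tier in (primary_disk, optimized_disk, tape):
        tier.write(key, data)

ingest("sensor/2011-06-10/feed-001", b"point-in-time readings")
```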

Big Data is a big storage challenge: not only storing the data, but putting it on a platform fast enough that meaningful analytics can be run while, at the same time, remaining cost-effective and reliable. These are conflicting demands in the world of storage, and resolving that conflict is going to require a new way of doing things.

Follow Storage Switzerland on Twitter

George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Storage Switzerland's disclosure statement.
