Commentary
2/25/2009 06:40 PM
George Crump

Better Storage Practices To Improve Backup

Backup is the thorn in the side of many otherwise smoothly running IT operations. It is probably no coincidence that the backup process is almost always assigned to the newest hire, or handed out as the penalty for missing the assignments meeting. The truth is that backup should be simple -- all you're doing is copying data to tape. The problem generally has nothing to do with the backup process itself; it has more to do with how primary storage is managed and optimized.

The primary problem is size: there is too much data, and too much of it is stagnant and unchanging. All of that unchanged data most often has to traverse the standard IP network or, at best, a SAN. While disk backup targets, especially those with data deduplication, store this backup data efficiently, they don't fix the problem at its source.

Fixing problems at the source is where you can derive the most improvement for the least investment. For example, look at real-time compression products like those from Storwize. These solutions compress file-based data in place and, in most cases, achieve better than 60% data reduction. For user access, data is compressed and decompressed on the fly, transparently and with little, if any, performance impact. The backup application can access the data in its compressed format, cutting in half or better the amount of data that must travel across the network and then be stored on backup disk and/or tape. Interestingly, these inline compression appliances are compatible with deduplication appliances and actually improve their effective reduction rates.
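The effect on backup traffic can be sketched in a few lines of Python, with zlib standing in for an inline compression appliance. This is purely illustrative -- the real products work transparently at the storage layer, not in application code, and actual reduction rates depend on the data:

```python
import zlib

def compress_in_place(data: bytes) -> bytes:
    """Compress file data as an inline appliance would, transparently to users."""
    return zlib.compress(data, level=6)

def decompress_on_read(blob: bytes) -> bytes:
    """Decompress on the fly when a user reads the file back."""
    return zlib.decompress(blob)

def backup_compressed(blob: bytes) -> bytes:
    # The backup application reads the already-compressed bytes directly,
    # so the network and the backup target only ever see the reduced size.
    return blob

# Highly redundant data compresses very well; real-world file data varies.
original = b"customer record, unchanged since 2007\n" * 1000
stored = compress_in_place(original)
assert decompress_on_read(stored) == original  # transparent to the user
reduction = 1 - len(stored) / len(original)    # fraction of data removed
```

The key point the sketch captures is that the backup path (`backup_compressed`) never decompresses: reduction achieved once on primary storage carries through the network, the backup disk, and the tape copy.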

The next step is to look at software that reduces the amount of data sent across the network in the first place. These source-side reduction backup applications can either perform deduplication, like EMC's Avamar, or block-level incremental backups, like SyncSort's Backup Express. While deduplication has the advantage of maximum backup disk storage optimization, block-level incremental (BLI) technology creates a backup volume that can be used for other purposes because it is itself a readable file system. BLI also has less impact on the server being protected, making more frequent backups practical.
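A minimal sketch of the block-level incremental idea, assuming fixed-size blocks compared by hash -- a simplification of what shipping products actually do, but it shows why only a fraction of the volume crosses the network after the first full backup:

```python
import hashlib

BLOCK_SIZE = 4096  # illustrative block size

def block_hashes(volume: bytes) -> list:
    """Hash each fixed-size block of the volume."""
    return [hashlib.sha256(volume[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(volume), BLOCK_SIZE)]

def incremental_backup(previous: bytes, current: bytes) -> dict:
    """Return only the blocks that changed since the last backup."""
    prev_hashes = block_hashes(previous)
    changed = {}
    for idx, h in enumerate(block_hashes(current)):
        if idx >= len(prev_hashes) or h != prev_hashes[idx]:
            changed[idx] = current[idx * BLOCK_SIZE:(idx + 1) * BLOCK_SIZE]
    return changed

def apply_incremental(previous: bytes, changed: dict) -> bytes:
    """Rebuild the current volume from the last copy plus the changed blocks."""
    blocks = [previous[i:i + BLOCK_SIZE]
              for i in range(0, len(previous), BLOCK_SIZE)]
    for idx, data in changed.items():
        if idx < len(blocks):
            blocks[idx] = data
        else:
            blocks.append(data)
    return b"".join(blocks)

old = bytes(BLOCK_SIZE) * 8                # eight zero-filled blocks
new = bytearray(old)
new[BLOCK_SIZE:BLOCK_SIZE + 4] = b"edit"   # touch only block 1
delta = incremental_backup(old, bytes(new))  # one block, not eight
```

Because the rebuilt result is an ordinary byte-for-byte copy of the volume, the backup remains directly readable -- the property the article credits to BLI.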

Finally, it makes sense to archive all this stagnant data off primary storage. Most studies indicate that more than 80% of this data can be archived and removed from the backup process entirely. With disk archive products like those from Permabit, NexSAN, or Copan Systems, you can be more aggressive about what you move than with tape-based systems, dramatically reducing your investment in primary storage and its power costs.
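Identifying what qualifies as stagnant can start with something as simple as a modification-time scan. The threshold below is an assumption for illustration, not a policy the article prescribes:

```python
import os
import time

STAGNANT_DAYS = 365  # illustrative threshold; tune to your retention policy

def archive_candidates(root: str, days: int = STAGNANT_DAYS):
    """Yield (path, size) for files untouched longer than the threshold."""
    cutoff = time.time() - days * 86400
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # file vanished or is unreadable; skip it
            if st.st_mtime < cutoff:
                yield path, st.st_size
```

Summing the yielded sizes against total capacity gives a quick estimate of how much of the 80%-plus figure applies to your own environment before committing to an archive product.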

Getting data to this archive can be as simple as using the operating system's move command. In our next entry we will look at a new technique, an optimized move.
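As a sketch of that simple approach, here is a plain OS-level move that also preserves the file's directory layout on the archive tier, so users and scripts can still find data by the same relative path. The function and path names are illustrative, not part of any product:

```python
import os
import shutil

def move_to_archive(path: str, primary_root: str, archive_root: str) -> str:
    """Move a file to the archive tier, mirroring its primary-storage layout."""
    rel = os.path.relpath(path, primary_root)        # e.g. projects/report.txt
    dest = os.path.join(archive_root, rel)
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    shutil.move(path, dest)  # a plain move, as the article suggests
    return dest
```

Note that across file systems `shutil.move` degrades to copy-plus-delete, which is one reason a purpose-built "optimized move" is worth a follow-up discussion.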

View our Webcast on Primary Storage Optimization to learn more.

Track us on Twitter: http://twitter.com/storageswiss.

Subscribe to our RSS feed.

George Crump is founder of Storage Switzerland, an analyst firm focused on the virtualization and storage marketplaces. It provides strategic consulting and analysis to storage users, suppliers, and integrators. An industry veteran of more than 25 years, Crump has held engineering and sales positions at various IT industry manufacturers and integrators. Prior to Storage Switzerland, he was CTO at one of the nation's largest integrators.
