Dark Reading is part of the Informa Tech Division of Informa PLC


News

5/30/2008 11:11 AM
George Crump
Commentary
Speed's Dead

In my recent article on data deduplication on InformationWeek's sister site, Byte and Switch, a question came up about the impact on speed. As we talk to customers throughout the storage community about backup priorities, a surprising trend continues: shrinking the backup window has become less of a priority for disk-to-disk backup solutions. Why?

Speed of the backup target is really not the issue anymore, as a single LTO-4 tape drive can receive data at an impressive 120 MB/s. Even in-line data deduplication devices, which supposedly sacrifice ingest speed for the advantages of deduplicating data as it arrives, are now receiving data at more than 1 TB per hour. Most servers, infrastructures, and even the backup software itself can't keep up with the ingest capabilities of the modern backup target.
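To put those two figures on the same scale, a quick back-of-the-envelope conversion (a sketch using the article's own numbers, with decimal units assumed) shows that 1 TB per hour actually outruns a single LTO-4 drive:

```python
# Rough throughput comparison using the figures quoted in the text.
# Assumes decimal units (1 TB = 1,000,000 MB), as vendors typically quote.

lto4_mb_per_s = 120  # LTO-4 native transfer rate per the article

tb_per_hour = 1      # in-line dedupe appliance ingest rate per the article
dedupe_mb_per_s = tb_per_hour * 1_000_000 / 3600  # TB/hour -> MB/s

print(f"LTO-4 tape drive:        {lto4_mb_per_s} MB/s")
print(f"1 TB/hour dedupe ingest: {dedupe_mb_per_s:.0f} MB/s")
```

So the dedupe appliance's quoted ingest rate works out to roughly 278 MB/s, more than double a single tape drive's native speed, which is why the bottleneck has moved upstream to the servers and backup software.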

For disk-to-disk backup, customers are putting the priority on how well they store data long term, how well they improve recovery performance, and, in what seems to capture the most interest, how well they enhance the ability to replicate data to a disaster recovery site. In all of these cases, target-side data deduplication provides a solution. In my next article on Byte and Switch, we will discuss the pros and cons of doing the deduplication inline vs. post-processing.

For today's entry, though, there are two issues to discuss, and I only have space for one, so I'll save the other for another day. What should data centers with massive amounts of data do, the ones most likely to actually move data faster than 1 TB an hour and that genuinely need to reduce the backup window?

About 40% of the users we work with have well over 100 TB of storage under management, and for them, tape is staying. How do you integrate tape into the process? In most cases, it's a separate move from the disk target back through the backup server. In smaller, sub-50 TB centers (it's amazing that 50 TB now counts as small!), that's not a massive challenge. In large centers, I believe this is impractical and a different technology is needed: backup virtualization.

Backup virtualization creates a virtual pool of the various backup targets and presents a consolidated target to the backup server. The backup virtualization appliance performs the movement of data between the targets, not the backup application.

In sites where you have TBs of data to move and need to do so quickly, consider backup virtualization. With these solutions in place you can buy a small but very fast disk cache, trickle that data to a relatively fast disk-based deduplication appliance, leverage the deduplication appliance's ability to replicate that data across a thinner WAN segment to a DR site and, when the time is right, move it to tape. This can all be done without having to set up complex jobs in the backup application.
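The tiering described above can be sketched in a few lines of code. This is purely illustrative: the tier names, age thresholds, and methods below are hypothetical, not any vendor's API. The point is that the backup server writes to one consolidated target, and the appliance, not the backup application, ages data down through the tiers:

```python
from dataclasses import dataclass, field

@dataclass
class Tier:
    name: str
    backups: list = field(default_factory=list)

@dataclass
class VirtualTarget:
    # Three backing tiers hidden behind one consolidated target.
    cache: Tier = field(default_factory=lambda: Tier("fast disk cache"))
    dedupe: Tier = field(default_factory=lambda: Tier("dedupe appliance"))
    tape: Tier = field(default_factory=lambda: Tier("tape library"))

    def ingest(self, backup: str):
        # The backup server sees a single target; data lands on fast cache.
        self.cache.backups.append(backup)

    def age(self, days_old: dict):
        # The virtualization appliance, not the backup application,
        # trickles data down-tier. Thresholds here are hypothetical.
        for b in list(self.cache.backups):
            if days_old[b] >= 1:        # 1-day window on the fast cache
                self.cache.backups.remove(b)
                self.dedupe.backups.append(b)
        for b in list(self.dedupe.backups):
            if days_old[b] >= 30:       # 30-day retention on dedupe disk
                self.dedupe.backups.remove(b)
                self.tape.backups.append(b)

vt = VirtualTarget()
vt.ingest("fulls-monday")
vt.age({"fulls-monday": 45})
print(vt.tape.backups)  # a 45-day-old backup has aged all the way to tape
```

The backup application only ever runs one job against one target; the cascade from cache to dedupe to tape is policy inside the appliance, which is exactly why the complex multi-stage jobs disappear from the backup software.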

In an upcoming entry I will talk about some ideas for reducing the backup window by thinning the amount of data used in the backup process.

George Crump is founder of Storage Switzerland, an analyst firm focused on the virtualization and storage marketplaces. It provides strategic consulting and analysis to storage users, suppliers, and integrators. An industry veteran of more than 25 years, Crump has held engineering and sales positions at various IT industry manufacturers and integrators. Prior to Storage Switzerland, he was CTO at one of the nation's largest integrators.
