
Commentary
6/11/2008 03:11 PM
George Crump

Resurrecting Speed

In a recent entry I pronounced 'speed is dead' as it relates to solving the backup window problem. As that entry explained, the NEED to reduce the backup window is as strong as ever; it is the ABILITY to reduce it that has become the challenge. The network infrastructure, the ability of the servers being protected to send data fast enough, and a host of other issues are now the big limiters on backup window reduction.

To collapse the backup window substantially, we need to recognize that we are looking at the wrong end of the straw. The modern backup target, be it disk, tape, or something else, is plenty fast enough for most enterprises. Reducing the backup window is going to require either a substantial investment in upgrading the surrounding infrastructure, or reducing both the amount of backup data that crosses the network and the amount of data present on primary storage in the first place.

Improving the infrastructure is a budget issue for most customers we talk to. Most, if not all, are at 1 Gigabit Ethernet, so a dramatic jump in infrastructure performance would have to come from moving most servers to the SAN or implementing 10 Gigabit Ethernet.
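
For a sense of the gap, a simple back-of-the-envelope calculation tells the story. The sketch below estimates raw transfer time at each link speed; the 10 TB data set and 70% effective link utilization are illustrative assumptions, not measurements:

    # Rough estimate of raw backup transfer time over a network link.
    # The data set size and effective utilization are assumptions for
    # illustration only.
    def backup_hours(data_tb, link_gbps, efficiency=0.7):
        data_bits = data_tb * 8e12                    # decimal TB -> bits
        effective_bps = link_gbps * 1e9 * efficiency  # usable bits/second
        return data_bits / effective_bps / 3600

    for link_gbps in (1, 10):
        print(f"{link_gbps:>2} GbE: ~{backup_hours(10, link_gbps):.1f} hours for 10 TB")

Under those assumptions, the same 10 TB that ties up a 1 GbE link for more than a day moves in roughly three hours at 10 GbE: the surrounding infrastructure, not the target, is the bottleneck.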

Reducing primary storage capacity can complement a backup solution that moves less data across the network. To some extent, you can probably do this today with your current software. You could, for example, stop doing weekly full backups and run only incrementals. That approach is less than desirable, since it requires many more media mounts during a recovery. Many backup applications can instead create a consolidated full, where a baseline full is taken once and subsequent incrementals are merged into it to create a new master full.
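
Conceptually, the consolidation is just a merge of catalog entries. Here is a minimal sketch of the idea, under the simplifying assumptions that each backup set is a mapping of file paths to content references and that deletions are recorded explicitly; real products do this bookkeeping inside their own catalogs:

    # Sketch of a consolidated ("synthetic") full: merge a baseline full
    # with a series of incrementals to produce a new master full without
    # re-reading the protected server. The dict-of-paths model and the
    # DELETED marker are simplifying assumptions.
    DELETED = object()  # sentinel an incremental records when a file is removed

    def consolidate(full, incrementals):
        """full: {path: content_ref}; incrementals: oldest-first list of
        {path: content_ref or DELETED}. Returns a new synthetic full."""
        synthetic = dict(full)
        for inc in incrementals:
            for path, ref in inc.items():
                if ref is DELETED:
                    synthetic.pop(path, None)
                else:
                    synthetic[path] = ref
        return synthetic

The payoff comes at recovery time: a restore needs only the one consolidated set, not the baseline plus every incremental mount.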

The big shift in the network requirement for backup comes from block-level incremental backups. NetApp does this with its Open Systems SnapVault technology and Syncsort does this with Backup Express XRS. A block-level incremental backs up just the blocks that have changed since the last backup. This is simpler than source-side data deduplication, since the comparisons are made on a volume-by-volume basis instead of across the whole enterprise. The result is almost no impact on the server during backup, with most backups typically completing in less than five minutes. That allows backups to run repeatedly throughout the day with very little growth in backup storage.
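
To make the mechanics concrete, here is a minimal sketch of one block-level incremental pass. The 64 KB block size and SHA-256 hashing are illustrative assumptions, and shipping products typically track changed blocks through snapshots or change journals rather than rescanning the volume, but the per-volume comparison is the same idea:

    # Sketch of a block-level incremental pass: hash fixed-size blocks of
    # a volume and ship only the blocks whose hashes differ from the
    # previous backup of that same volume.
    import hashlib

    BLOCK_SIZE = 64 * 1024  # illustrative; real products choose differently

    def changed_blocks(volume_path, prev_hashes):
        """prev_hashes: {block_index: digest} from the last backup of this
        volume. Yields (block_index, data, digest) for changed blocks."""
        with open(volume_path, "rb") as vol:
            index = 0
            while True:
                data = vol.read(BLOCK_SIZE)
                if not data:
                    break
                digest = hashlib.sha256(data).hexdigest()
                if prev_hashes.get(index) != digest:
                    yield index, data, digest
                index += 1

Because each volume is compared only against its own previous state, there is no enterprise-wide index to maintain, which is what keeps the load on the protected server so low.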

Also, the backup target for these types of solutions is typically an active target, meaning it can be used like any other file system, allowing for in-place recoveries, using the backup data set for test and development work, or manually copying data. Probably the most important component is that the move to secondary storage (tape, disk, deduplicated disk, or VTL) is integrated into the process.

A complement to this is reducing the amount of primary storage capacity altogether. Implementing a disk archive system, like those available from Caringo, Copan Systems, Permabit, and others, can do this. I have seen cases where, if disk-based archiving were implemented, as much as 80% of the data being backed up could be permanently archived. I detailed this a while back in my entry on Data Keepage. An upcoming entry will examine some of the hardware platforms available to address this.
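
Finding that 80% is usually a matter of scanning primary storage for data nobody has touched. The sketch below flags candidates by last access time; the 180-day cutoff and the stat-based walk are illustrative assumptions, since a real archiving product would apply its own policy engine:

    # Sketch of an archive-candidate scan: walk primary storage and flag
    # files whose last access is older than a cutoff.
    import os, time

    def archive_candidates(root, days_idle=180):
        cutoff = time.time() - days_idle * 86400
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    if os.stat(path).st_atime < cutoff:
                        yield path
                except OSError:
                    continue  # skip files that vanish or deny access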

George Crump is founder of Storage Switzerland, an analyst firm focused on the virtualization and storage marketplaces. It provides strategic consulting and analysis to storage users, suppliers, and integrators. An industry veteran of more than 25 years, Crump has held engineering and sales positions at various IT industry manufacturers and integrators. Prior to Storage Switzerland, he was CTO at one of the nation's largest integrators.
