Cloud Security

Alan Zeichick

12 Cloud Backup Tips to Protect Your Business's Back-End Servers

The cloud can offer cost-effective backups for enterprise web servers, file servers and other critical infrastructure. Here are a dozen tips on how to make cloud backups safe and efficient.

Every business should consider the cloud as a resource for backing up enterprise servers.

It's certainly a big improvement over stuffing nine-track tapes into a server, or even loading up an automated tape library: You don't have to worry about ferrying tapes off-site for safekeeping, about tapes running out of room, or about a tape -- or backup disk -- failing during the backup process.

That's not to say that cloud backups are free from their own challenges: They can consume considerable bandwidth, especially when data changes quickly each day. If you need to restore an entire server (or in the case of a building disaster, a whole array of servers), Internet download bandwidth won't be enough. You'll need to request a restoration hard drive to get back in business.
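
To put that bandwidth constraint in numbers, here's a rough back-of-the-envelope calculation. The figures below are illustrative assumptions, not measurements from any particular provider:

```python
# Rough restore-time estimate: how long would a full download take?
# All figures below are illustrative assumptions.

def restore_hours(data_tb: float, link_mbps: float, efficiency: float = 0.7) -> float:
    """Estimate hours to download data_tb terabytes over a link_mbps line,
    assuming we only sustain `efficiency` of the nominal bandwidth."""
    bits = data_tb * 8e12                   # terabytes -> bits
    seconds = bits / (link_mbps * 1e6 * efficiency)
    return seconds / 3600

# A 10 TB server over a 100 Mbps business line:
print(f"{restore_hours(10, 100):.0f} hours")   # ~317 hours, i.e. almost two weeks
```

Two weeks of downtime is not a recovery plan, which is why the shipped-drive option in tip 4 matters.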

Here are 12 suggestions that can help ensure that cloud backups can best serve real business needs, ranging from preserving key files to preparing for a physical site disaster.

1. Front-load the cloud backup by shipping physical backup disks. Many cloud backup services will allow you to conduct the initial backup by using local hard drives. Those hard drives are sent -- in well-packed shipping cases -- to the cloud facility, where the data is loaded onto cloud servers. From there, future incremental backups are conducted by uploading through the Internet. While this may seem like an extra step, it's vital when there's a lot of data to be backed up: the initial backup would simply take too long over an upload link -- and it can be verified before those hard drives leave your facility.
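
One way to do that verification is to generate a checksum manifest before the drives leave the building, then have the data rehashed and compared after it's loaded. A minimal sketch, assuming the seed drive is mounted at a hypothetical /mnt/seed:

```python
# Build a SHA-256 manifest of a seed drive so the upload can be verified
# later. The /mnt/seed path is a placeholder for your mounted drive.
import hashlib
from pathlib import Path

def sha256_file(path: Path, chunk: int = 1 << 20) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

root = Path("/mnt/seed")
with open("seed-manifest.txt", "w") as out:
    for p in sorted(root.rglob("*")):
        if p.is_file():
            out.write(f"{sha256_file(p)}  {p.relative_to(root)}\n")
```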

2. Purchase at least one bare-metal server that's comparable to the server(s) being backed up to the cloud. If a server fails the smoke test -- there is a catastrophic hardware failure, for instance -- you'll need to bring a new server online quickly. The last thing you'll want to hear is that your "new" server is out of stock, and will ship next Thursday. Having that server available -- possibly racked and cabled, if you use racks -- can chop many days off that time. In fact, you can begin restoring immediately.

3. Use that spare bare-metal server to test the quality of your backups, and rehearse the restoration process. If there's a problem with your backups, or if you don't know how to do the recovery, better to figure that out in advance as part of an exercise. My suggestion is to comprehensively test cloud backups at least annually, to make sure your backups are good, and that your staff knows how to bring a new server online without mishaps.
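
When you run that rehearsal, don't just eyeball the restored server -- diff it against a manifest captured from production. A sketch that reuses the manifest format from tip 1 (the mount point and file names are hypothetical):

```python
# Compare a restore against a previously saved manifest and report drift.
# Reuses the "checksum  relative/path" format from the seed manifest above.
import hashlib
from pathlib import Path

def sha256_file(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        while block := f.read(1 << 20):
            h.update(block)
    return h.hexdigest()

def load_manifest(path: str) -> dict:
    entries = {}
    with open(path) as f:
        for line in f:
            digest, rel = line.rstrip("\n").split("  ", 1)
            entries[rel] = digest
    return entries

expected = load_manifest("seed-manifest.txt")
restored = Path("/mnt/restored")            # placeholder mount point
for rel, digest in expected.items():
    target = restored / rel
    if not target.exists():
        print(f"MISSING  {rel}")
    elif sha256_file(target) != digest:
        print(f"CHANGED  {rel}")
```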

4. If your cloud backup provider can ship you a physical hard drive for doing that restoration, use that option, both for rehearsals and for the real event. Download bandwidth is slow, slow, slow. Yes, it costs money to request that the cloud server restore to an external hard drive and overnight-courier that drive to you. Yes, it costs even more money if that's not an option in a real crisis. Bonus hint: If your cloud backup provider doesn't offer that service, find another service.

5. Make sure that your cloud backups can be used for recovering both individual files and entire servers. Some backups are file-by-file, which are great for recovering data from a file store, but not for restoring applications, including configuration files. Others can restore an entire server, but not easily retrieve a single lost file -- like a document your CEO erased. You need both capabilities.

6. Look at how far back your backup provider will store changed or deleted files. I recommend a minimum of six months. Yes, that much storage can be quite expensive. Yes, it will seem a bargain compared to the cost of recovering lost data -- especially if server data was corrupted some time ago and the damage is only discovered now.
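
If you script your own snapshot pruning, make sure the policy can't silently shrink below that window. A sketch of a guard, assuming snapshots carry an ISO date (the names and numbers are illustrative):

```python
# Guard against pruning backups inside the required retention window.
# Assumes each snapshot has a known date, e.g. parsed from "web01-2020-03-15".
from datetime import date, timedelta

RETENTION_DAYS = 183   # roughly six months

def prunable(snapshot_dates: list, today: date) -> list:
    """Return only the snapshot dates old enough to be safely deleted."""
    cutoff = today - timedelta(days=RETENTION_DAYS)
    return [d for d in snapshot_dates if d < cutoff]

snaps = [date(2020, 1, 5), date(2020, 6, 1), date(2020, 9, 20)]
print(prunable(snaps, today=date(2020, 9, 25)))  # only the January snapshot
```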

7. Consider using cloud backups to supplement, not replace, local on-premises backups. The process I use: the cloud handles disaster recovery, holding off-site backups with six months of data. My servers, though, are backed up continuously to a local storage array that's big enough to hold a week's worth of files. If a file is lost or a server crashes, I restore from the local backup; in the case of a disaster, that's where the cloud comes in. The cloud has essentially replaced my old off-site backup system, which involved shipping tapes to a secure facility. This is the best of both worlds.
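
That split can be automated with a simple nightly job: mirror to the local array first (for fast day-to-day restores), then push the same snapshot to the cloud (for disaster recovery). A minimal sketch using rsync for the local leg; the paths and the cloud-sync command are placeholders, not any specific product's CLI:

```python
# Nightly job: local mirror first, then ship the same data to the cloud.
# Paths and the cloud command below are placeholders.
import subprocess, sys

SOURCE = "/srv/data/"
LOCAL_MIRROR = "/mnt/backup-array/data/"

def run(cmd: list) -> None:
    result = subprocess.run(cmd)
    if result.returncode != 0:
        sys.exit(f"backup step failed: {' '.join(cmd)}")

# 1. Fast local copy -- this is what you restore from day to day.
run(["rsync", "-a", "--delete", SOURCE, LOCAL_MIRROR])

# 2. Cloud copy for disaster recovery, taken from the local mirror so the
#    production server isn't read twice. Replace with your provider's tool.
run(["cloud-backup-client", "sync", LOCAL_MIRROR, "remote:offsite/data"])
```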

8. If you are paranoid, make sure your cloud backup service's hardware isn't in your own local geography. My paranoia level is not high, so my cloud service's data center is in the same state where I live. Someone more risk-averse might use a cloud service on the other side of the country. (Be careful about storing your data in a cloud service in another country, due to HIPAA, GDPR and other compliance issues.)

9. If you are even more paranoid, make sure that your cloud backup provider has its own backup facility in another geography. Again, make sure it's in the same country, unless you are prepared to handle compliance issues. Note: Backup providers may charge extra for those additional levels of protection. Only you can decide whether that's worth the cost. (I've chosen not to worry about it, for my own business.)

10. Ask hard questions about the security of your cloud service's backups. Merely having them encrypted isn't enough. Who has access to your files? Who has access to look through file names? What happens if there's a warrant from law enforcement? What about compliance? Those are all concerns. In terms of file names: Letting an admin see directory listings with files called "John Smith Termination Documents" or "Hostile Takeover of XYZ Corp." or "Famous Celebrity Medical Test Results" can be very, very problematic.
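
One way to keep file names out of an administrator's view is to encrypt both contents and names on your side before anything is uploaded. A sketch using Python's third-party cryptography package (Fernet symmetric encryption); the vault layout and the name-hashing scheme are my own illustration, not any provider's format:

```python
# Client-side encryption of both file contents and file names before upload,
# so the provider's admins see only opaque blobs.
# Requires: pip install cryptography
import base64, hashlib
from pathlib import Path
from cryptography.fernet import Fernet

key = Fernet.generate_key()    # store this safely off-site (see tip 12)
box = Fernet(key)

def vault_name(relpath: str) -> str:
    # Deterministic opaque name: admins can't read "CEO Termination.docx".
    digest = hashlib.sha256(key + relpath.encode()).digest()
    return base64.urlsafe_b64encode(digest).decode()[:32]

src, vault = Path("docs"), Path("vault")
vault.mkdir(exist_ok=True)
for p in src.rglob("*"):
    if p.is_file():
        rel = str(p.relative_to(src))
        # Embed the real name inside the ciphertext so restores can recover it.
        blob = box.encrypt(rel.encode() + b"\x00" + p.read_bytes())
        (vault / vault_name(rel)).write_bytes(blob)   # upload the vault dir
```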

11. Ensure that the cloud backup can intelligently handle complex file types, especially for files that are never closed. For example, you don't want to constantly back up an entire virtual machine's disk image -- which could be tens of terabytes in size -- if only one log file within that VM was changed. The same is true with, say, relational databases, CRM systems and other complex files that are changed incrementally. Look for the backup system to be aware of those special files. And be sure to test your ability to recover and work with those files.
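
Under the hood, "intelligent" handling of huge always-open files usually means block-level deduplication: hash the file in fixed-size chunks and re-upload only the chunks whose hashes changed since the last run. A toy sketch of the detection step (chunk size and file name are illustrative):

```python
# Block-level change detection: hash a huge file (e.g., a VM disk image)
# in fixed-size chunks so only modified chunks need to be re-uploaded.
import hashlib

CHUNK = 4 * 1024 * 1024   # 4 MiB blocks

def chunk_hashes(path: str) -> list:
    hashes = []
    with open(path, "rb") as f:
        while block := f.read(CHUNK):
            hashes.append(hashlib.sha256(block).hexdigest())
    return hashes

def changed_chunks(old: list, new: list) -> list:
    """Indices of chunks that differ and must be re-uploaded."""
    return [i for i, h in enumerate(new) if i >= len(old) or old[i] != h]

# yesterday = chunk_hashes("vm-disk.img")   # saved with the last backup
# today = chunk_hashes("vm-disk.img")
# upload only changed_chunks(yesterday, today)
```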

12. Make sure your encryption keys, license keys, passwords and other vital recovery data are readily available. There's little point in restoring an application if it refuses to start due to a perceived license restriction. This could encompass operating systems, virtual machines, applications, plug-ins and more. That's another reason to test bare-metal recovery, because you'll certainly find failures that you've never anticipated. Be prepared to curse. But again, better to curse during an exercise, instead of when trying to bring your business online after a disaster. (Hint: Make sure that information is safe and readily accessible off-site. Crypto keys and license keys won’t help you if they're in a file cabinet that washed away in a flood.)
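
One low-tech way to keep that recovery data available is a single encrypted bundle, refreshed whenever keys change and copied somewhere outside the building. A sketch, again using Fernet; the file names are placeholders for your own key and license files:

```python
# Bundle license keys, passwords and crypto keys into one encrypted archive
# that can live off-site. File names below are placeholders.
# Requires: pip install cryptography
import tarfile
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # keep a printed copy in a safe, off-site
with tarfile.open("recovery-kit.tar", "w") as tar:
    for f in ["license-keys.txt", "backup-crypto-keys.txt", "runbook.md"]:
        tar.add(f)

sealed = Fernet(key).encrypt(open("recovery-kit.tar", "rb").read())
open("recovery-kit.tar.enc", "wb").write(sealed)   # copy this off-site
```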

The cloud is an excellent resource for backups, but like any other business technology tool, it needs to be used carefully. Think of cloud backup services as cloud recovery services, and let that guide your planning.

Alan Zeichick is principal analyst at Camden Associates, a technology consultancy in Phoenix, Arizona, specializing in enterprise networking, cybersecurity, and software development. Follow him @zeichick.
