George Crump

What Business Data Should Be In The Cloud?

In our last entry we discussed different ways that you can move data into the cloud, something I call onramps. In theory the ability now exists to put all your data types on a cloud storage platform, but is that the right choice for your business? How do you determine which data you should put in the cloud?

The answer, as we will discuss in our upcoming webinar, "What's Your Cloud Storage Strategy", is, like almost everything else in IT, that it depends. It depends on what your key internal storage challenges are and what the internal resistance to using an external service might be. Notice what is not part of that discussion: the size of your company, the amount of IT resources you have, or the amount of data you manage. Cloud storage is often assumed to be for small businesses only, but there are cloud storage solutions for businesses of all sizes, including large enterprises.

The first area to examine is how much data is being accessed on a moment-by-moment basis. As you may have noticed from our last entry, there is now an onramp or cloud gateway for almost every data type, ranging from backups to primary block storage. The moment-by-moment change rate plus the data type will determine how large the local gateway cache needs to be and how often data must be recalled from the cloud. The total size of the data set is largely irrelevant beyond the per-GB cost to store it, and that cost should be relatively static. What delays an application is the movement of data from the cloud back to your local cache, so the more often data can be served from that cache, whether through smart caching algorithms or generous cache space, the better. Also, several cloud storage providers charge extra for transferring data out of the cloud back to local storage, which can lead to a surprise on your bill. Since most onramps or gateways give you a choice of provider, it makes sense to know what the hidden extras are from each provider.
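To make that math concrete, here is a minimal sketch in Python of the sizing and cost reasoning above. Every figure in it, the change rate, cache hit rate, and per-GB prices, is a hypothetical placeholder rather than a quote from any provider; the point is simply that the change rate sizes the cache while cache misses drive egress fees.

```python
# Hypothetical sizing sketch: the working-set change rate, not the total
# data size, determines how big the local gateway cache must be, and cloud
# egress fees apply whenever a read misses that cache.

DAILY_CHANGE_GB = 50      # data modified or created per day (assumption)
RETENTION_DAYS = 14       # days of hot data to keep in the local cache
CACHE_HIT_RATE = 0.95     # fraction of reads served locally (assumption)
MONTHLY_READ_GB = 2000    # total data read by applications per month
EGRESS_PER_GB = 0.09      # provider's transfer-out price, USD/GB (assumption)

cache_size_gb = DAILY_CHANGE_GB * RETENTION_DAYS
egress_gb = MONTHLY_READ_GB * (1 - CACHE_HIT_RATE)
monthly_egress_cost = egress_gb * EGRESS_PER_GB

print(f"Local cache needed: {cache_size_gb} GB")
print(f"Cloud recalls per month: {egress_gb:.0f} GB "
      f"-> ${monthly_egress_cost:.2f} in egress fees")
```

Even at a 95 percent hit rate, a heavy read workload generates a recurring egress bill, which is exactly the hidden extra worth checking before picking a provider.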

The impact of restoring data back from the cloud, and its potential extra costs, is one of the reasons that backup and archive data have been such popular cloud use cases. The transfer is almost always one way: upload. Also, most large recoveries can happen from the local cache and don't need the data stored in the cloud; the backup copy in the cloud serves mostly as a long-term retention area. As you move into using cloud storage for primary data, the transfer issues become a bit thornier. The easiest primary data set to deal with is the file share. Most files on a file server are active for only a few days and then become dormant. This is an ideal use case for cloud storage: let the older files migrate to the cloud. Even if they do need to be recalled later, only a single user is typically impacted by the delay in access, and a single file access is relatively fast.
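As an illustration of that aging rule, here is a minimal sketch assuming a POSIX file share where access times are tracked. The 30-day cutoff is an arbitrary assumption, and migrate_to_cloud() is a hypothetical hook standing in for whatever gateway or API would actually move the data.

```python
# Hypothetical tiering sweep: files untouched past a cutoff become
# candidates for migration from the local file share to cloud storage.

import os
import time

DORMANT_DAYS = 30                       # assumption: dormancy threshold
cutoff = time.time() - DORMANT_DAYS * 86400

def find_dormant(share_root):
    """Yield files whose last access time is older than the cutoff."""
    for dirpath, _dirnames, filenames in os.walk(share_root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.stat(path).st_atime < cutoff:
                    yield path
            except OSError:
                continue  # file vanished or is unreadable; skip it

for path in find_dormant("/srv/fileshare"):
    print("candidate for cloud tier:", path)  # e.g. migrate_to_cloud(path)
```

In practice a gateway appliance runs this kind of policy for you, but the logic is the same: age out the dormant majority and keep only the active minority local.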

Databases are trickier. Here, look for applications where only a small portion of the data set is accessed on a regular basis. Microsoft SharePoint is a good example of a "ready for cloud now" data set, as are some mail systems that store attachments and messages as discrete files. In the near future, don't rule out busy, transaction-oriented databases. As the developers of these platforms embrace the availability of cloud storage, they can build in ways to automatically segment and tier sections of data so that each section is stored on a different storage type, and the cloud could be one of those types.
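As a rough illustration of that auto-tiering idea, the sketch below routes data segments to local or cloud storage based on how often they are accessed. The segment names, access rates, and threshold are invented for the example; in a real platform they would come from the database's own telemetry.

```python
# Hypothetical tier-selection policy: hot segments stay on local primary
# storage, cold segments are eligible for the cloud tier.

HOT_ACCESSES_PER_DAY = 10.0  # assumption: above this, keep data local

def choose_tier(accesses_per_day: float) -> str:
    """Return a storage tier for a data segment based on access frequency."""
    if accesses_per_day >= HOT_ACCESSES_PER_DAY:
        return "local-primary"
    return "cloud"

segments = {
    "current_quarter_rows": 250.0,  # busy transactional data
    "2014_archive_rows": 0.2,       # rarely read history
    "mail_attachments": 1.5,        # discrete files, occasionally recalled
}

for name, rate in segments.items():
    print(f"{name}: {choose_tier(rate)}")
```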

The second common decision point is the initial load of data into the cloud. How do you get it all there? That will be the focus of our next entry.


George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Find Storage Switzerland's disclosure statement here.
