Commentary
Bill Kleyman
1/9/2013

When Cloud Computing Is The Wrong Fit

ROI is the first question to answer when deciding whether cloud computing is the right platform for your enterprise. Compliance, infrastructure readiness, and the strength of your business case come next.

I’m a big fan of cloud computing and always enjoy seeing it done right. That means good planning, a solid infrastructure, and a use case that directly fits what the cloud can deliver.

Today, almost every organization that uses the Internet relies on some element of cloud computing. The differentiator is the cloud model and the extent to which that model is deployed. In my experience, there are instances where a particular cloud platform is a great fit. On the other hand, some companies absolutely do not need this type of solution.

One of the most important first steps when deciding whether to adopt a cloud platform is to establish a solid use case that can generate ROI. From there, look at the investment your company will need to make. In some cases, migrating to the cloud just won’t make sense. Here are three examples.
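
To put a rough number on that ROI question, a back-of-envelope comparison is often enough. The short Python sketch below compares cumulative on-premises and cloud costs over a five-year horizon; every figure in it is an invented assumption for illustration, not data from this article, and the function name is mine.

    # Hypothetical back-of-envelope cost comparison (all figures are assumptions)
    def cumulative_costs(years,
                         onprem_capex=250_000,          # assumed upfront hardware/licensing
                         onprem_opex_per_year=60_000,   # assumed power, space, admin staff
                         cloud_opex_per_year=115_000):  # assumed subscription/usage fees
        """Return cumulative on-prem and cloud cost for each year of the horizon."""
        onprem = [onprem_capex + onprem_opex_per_year * y for y in range(1, years + 1)]
        cloud = [cloud_opex_per_year * y for y in range(1, years + 1)]
        return onprem, cloud

    onprem, cloud = cumulative_costs(years=5)
    for year, (op, cl) in enumerate(zip(onprem, cloud), start=1):
        cheaper = "cloud" if cl < op else "on-prem"
        print(f"Year {year}: on-prem ${op:,}  cloud ${cl:,}  -> {cheaper} is cheaper so far")

With these made-up numbers the cloud wins early on and the on-premises build catches up around year five; the point of the exercise is simply that the break-even year, not the first-year bill, should drive the decision.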

1. Compliance and regulations
Unless you are planning a very secure private cloud, most cloud computing platforms rely on some element of public Internet infrastructure. That might mean sharing bandwidth or using a third-party datacenter to host your solution.

Think twice about the cloud if you’re in an industry heavily governed by compliance rules and regulations. Only a handful of enterprise datacenters can manage PCI compliance for the organizations they host, and that compliance often comes at a high cost. Always take regulations into account before committing to a cloud provider.

2. Infrastructure
In some cases, the business plan is there, but the environment is not. A good cloud solution often means pulling together pieces of storage, LAN/WAN, servers, virtualization, and user control. If any piece is missing or isn’t ready to handle this type of new load, there’s a good chance you’ll experience performance degradation.

This situation is where analyzing ROI and the actual business investment is critical. Be sure to ask key questions: How much additional hardware will you need to buy? Does it actually make sense to host infrastructure off site? What’s more, infrastructure isn’t limited to hardware. You also need the right people to support your cloud environment, which means employing engineers who are cloud-ready and managers who understand the vision behind their cloud model.

3. Poor business case
Developing a strong business case means identifying a set of challenges and finding a way to overcome them with the right technology. Unfortunately, unexpected events can slow down a cloud migration and cost companies a lot of money.

To avoid a cloud budget-buster, it’s important to develop a business case built on technology that will perform for current and future needs. That means datacenter managers and architects have to consider how the business will evolve and be flexible and forward-thinking in developing a cloud strategy. For example, if administrators provision hardware that can’t support users after a year or so, it’s quite possible that the initial planning was flawed, and the results will be disastrous.
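
As a minimal sketch of that planning failure (the growth rate, host count, and user density below are invented for the example, not figures from the article), a few lines of arithmetic show how quickly a fixed hardware provision runs out under steady user growth.

    # Hypothetical capacity check: in which year does projected user load outgrow
    # the provisioned hardware? All inputs are illustrative assumptions.
    def year_capacity_exceeded(current_users=2_000,
                               annual_growth_rate=0.35,  # assumed 35% yearly user growth
                               users_per_host=250,       # assumed users one host can serve
                               provisioned_hosts=12,
                               horizon_years=20):
        capacity = users_per_host * provisioned_hosts
        users = current_users
        for year in range(1, horizon_years + 1):
            users *= 1 + annual_growth_rate
            if users > capacity:
                return year
        return None  # capacity holds for the whole horizon

    year = year_capacity_exceeded()
    print(f"Projected load exceeds provisioned capacity in year {year}"
          if year else "Capacity holds for the full planning horizon")

With these assumed inputs the environment is saturated in year two, which is exactly the “can’t support users after a year or so” scenario that planning is supposed to catch.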

Like any technology, cloud computing starts with a well-conceived plan and an infrastructure that will endure. Processes like testing, maintenance, business continuity, and even personnel training are all important to weigh when considering the pluses and minuses of migrating to the cloud. With the right model and a good infrastructure in place, the cloud can be a powerful platform to leverage. With the wrong mindset and a poorly planned deployment, however, a cloud model can quickly become a cash drain.

This article originally appeared in The Transformed Datacenter on 1/9/2013.
