
Operations
11/3/2016 11:30 AM
Jeff Schilling
Commentary

Managing Multi-Cloud Security ‘Whether You Want to or Not’

Yes, it is possible to orchestrate security across multiple clouds without creating performance hurdles. Here's how.

In my experience, conversations with customer security teams often begin with: “I just found out that one of our business owners built infrastructure in the public cloud, and it is hosting a critical business process.” Or: “We can’t afford the tech refresh in my current datacenter, and I have been directed to manage a multi-year migration plan to the public cloud.” Hence the headline of this piece: “…whether you want to or not.”

Hybrid cloud security management is a popular industry trend that has spawned a plethora of service offerings, all claiming to provide the “single pane of glass” for visualizing security posture across datacenters. However, without a plan for orchestrating security across multiple clouds, you inherently end up with a collection of disparate data that makes management difficult.

For those fortunate enough to still run a centralized, in-house datacenter, the reality is that those days are most likely numbered. But it is not too late to embrace this new reality and plan accordingly. Rather than succumbing to the “want to or not” category, developing a roadmap for managing multi-cloud security on one’s own terms is not as difficult as one might think.

Data classification
The first step in managing this divergent landscape is to classify datacenter environments into low, medium and high risk. This allows for proactive management of the type of data to send to each cloud option. The breakdown should include:

  • Low-risk environments (e.g. static marketing webpages) are great candidates for public cloud offerings with a few security controls to detect an infected server.
  • Medium-risk environments (e.g. dev environments, collaboration systems) require protection, but likely will benefit from the agility and cost of public cloud options. This classification should have a managed security solution that pulls security telemetry into a Security Information and Event Management (SIEM) tool or a third-party security monitoring tool.
  • High-risk environments (e.g. payment card data, personal health information) must be protected by auditable security controls that are continuously maintained. This could be done in the public cloud; however, most organizations choose to keep these workloads within internal IT infrastructure or host them in a secure private cloud. This is normally the last environment to move to the cloud, and it is often the bottleneck that keeps an organization from consolidating all environments in one location.
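The three tiers above can be captured in a simple classification policy. Here is a minimal sketch in Python; the tier names follow the article, but the data-type labels, hosting options, and control lists are illustrative assumptions, not a prescribed taxonomy:

```python
# Hypothetical sketch: map each risk tier to allowed hosting options and
# baseline controls, and classify a workload by the data it handles.

RISK_POLICY = {
    "low": {
        "allowed_hosting": ["public cloud"],
        "required_controls": ["basic malware/intrusion detection"],
    },
    "medium": {
        "allowed_hosting": ["public cloud"],
        "required_controls": ["managed security solution", "SIEM telemetry"],
    },
    "high": {
        "allowed_hosting": ["internal IT", "secure private cloud"],
        "required_controls": ["auditable controls", "SIEM telemetry",
                              "encryption at rest", "access auditing"],
    },
}

# Illustrative data-type labels that force a workload into a tier.
HIGH_RISK_DATA = {"pci", "phi", "pii"}
MEDIUM_RISK_DATA = {"source_code", "internal_docs"}

def classify(data_types):
    """Return the risk tier for a workload given the data types it handles."""
    types = {t.lower() for t in data_types}
    if types & HIGH_RISK_DATA:
        return "high"
    if types & MEDIUM_RISK_DATA:
        return "medium"
    return "low"

tier = classify(["PCI", "internal_docs"])          # PCI data wins: "high"
print(tier, RISK_POLICY[tier]["allowed_hosting"])
```

The point of encoding the policy this way is that the hosting decision becomes a lookup rather than a per-project debate: the most sensitive data type present determines the tier.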

Setting the Security Framework
Once an approach to distributing workloads to various cloud options is determined, the next step is to define the security framework and tools to leverage in each environment. It is advisable to organize security controls into high-level buckets in accordance with the compliance frameworks being used (e.g. NIST 800-53, ISO 27001, PCI DSS), then standardize the tools used to implement those controls.

For example, consider the tools for network inspection (layer 3/layer 4), application inspection (layer 7), network segmentation, configuration control, and endpoint detection/remediation. Whenever possible, the same tools should be used across multiple clouds. If this isn’t feasible, ensure the log output from those tools can be consumed and visualized by a correlation tool or SIEM.
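One lightweight way to act on this is to inventory which tool implements each control bucket in each cloud and flag the buckets where tooling diverges. A hypothetical sketch, with made-up tool and cloud names:

```python
# Hypothetical sketch: record the tool implementing each control bucket
# per cloud, then flag buckets where tooling diverges across clouds --
# those are the candidates for standardization or log normalization.

CONTROL_TOOLING = {
    "network inspection (L3/L4)":  {"public": "vendor-fw",  "private": "vendor-fw"},
    "application inspection (L7)": {"public": "waf-x",      "private": "waf-y"},
    "network segmentation":        {"public": "sec-groups", "private": "vlan-acls"},
    "configuration control":       {"public": "cfg-tool",   "private": "cfg-tool"},
    "endpoint detection":          {"public": "edr-z",      "private": "edr-z"},
}

def divergent_buckets(tooling):
    """Return control buckets implemented by different tools in different clouds."""
    return [bucket for bucket, tools in tooling.items()
            if len(set(tools.values())) > 1]

for bucket in divergent_buckets(CONTROL_TOOLING):
    print(f"standardize tooling (or normalize logs) for: {bucket}")
```

In this example, application inspection and segmentation diverge, so those are the buckets whose logs most need a common format before they can share one pane of glass.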

The final step in building a multi-cloud security platform is to create the logging infrastructure that allows all of the information to flow into the proverbial “single pane of glass.” The critical aspect here is to settle on a single logging standard (e.g. Syslog, JSON), then convert where necessary to integrate with a visualization tool or correlation engine. This is where many security teams choose a third-party tool or management portal, offloading this demanding architecture design task to an outside group.

Once these steps are in place, building a sound roadmap to a secure multi-cloud environment becomes far more manageable. A solid plan will also be able to sustain additional growth and ensure that the ROI that the cloud offers is fully realized. The ultimate goal is to have seamless security without creating performance hurdles. Being proactive and thinking big picture is a huge first step.

Related Content:

Black Hat Europe 2016 is coming to London's Business Design Centre November 1 through 4. Click for information on the briefing schedule and to register.

Jeff Schilling, a retired U.S. Army colonel, is Armor's chief security officer. He is responsible for the cyber and physical security programs for the corporate environment and customer-focused capabilities. His areas of responsibility include security operations, governance ...