Cloud

10/3/2018
10:30 AM

Putting Security on Par with DevOps

Inside the cloud, innovation and automation shouldn't take a toll on protection.

DevSecOps: It's not a very friendly acronym. It reeks of techno-babble, sounds a little military, and resists a consumer connection. But think again. This is a vital discipline that's directly relevant to every enterprise and every individual, particularly within cloud infrastructures, and has long deserved greater attention.

Maybe that's why we're now seeing greater research and more discussion devoted to the subject. But what's really at stake here? And what needs to happen next?

First, let's understand the context. Cloud computing has transformed the way organizations create and manage digital services, and that includes a big change in how software is developed and deployed. DevOps was designed to break down silos among development, quality assurance and IT operations, and speed innovation in the process. This meant teams outside the IT orbit took control, and the always-on public cloud certainly helped.

But there was one little hiccup — as lines around ownership and accountability got blurred, security got left behind. Flexibility, yes; competitive advantage, sure; innovation, absolutely. Protection? Not so much. So, moving forward, here's a blueprint for gaining security without compromising productivity.

DevSecOps is nothing more — and nothing less — than the process of uniting the two main stakeholders, DevOps and security, in a spirit of collaboration. Many organizations have multiple DevOps teams, especially with multiple business units. That's why it's important for the security practice to own the cloud security program, which can encompass uniform monitoring and central visibility across all public cloud environments.

Another obstacle here is that DevOps is heavily automated, which is a good thing, while many aspects of traditional security involve manual audits. If DevSecOps is to work, security must be similarly automated, but professionals in this field worry that this will give rise to endless alerts. However, there have been major advances, and solutions are available to implement a fully automated security workflow that not only detects alerts but greatly eases the immediate resolution of key issues.

With that as the foundation, here are some best practices to build upon.

Automatic Discovery
Public cloud environments are constantly changing — that's actually a major advantage — and it's not feasible to manually audit the entire landscape for assets.

  • Resource discovery: Discover cloud resources as soon as they're created, modified, or terminated. An API-based approach for automated discovery is more scalable than an agent-based approach; some types of cloud resources don't allow agents to be installed, which creates blind spots. This is especially important as organizations increasingly adopt serverless computing (e.g., AWS Lambda).
  • Application profiling: Discover which applications are running on the hosts to better assess risk. For example, knowing that a publicly accessible host is running MongoDB software with a known vulnerability indicates higher risk than a publicly accessible web server with no known vulnerabilities.
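The API-based discovery approach described above can be sketched as a simple snapshot diff: each poll of the provider's inventory API yields a map of resource IDs to configurations, and comparing successive snapshots reveals what was created, modified, or terminated. The resource IDs and configuration fields below are invented for illustration and aren't tied to any real cloud API.

```python
# Hypothetical sketch: detect created, modified, and terminated cloud
# resources by diffing two inventory snapshots taken from successive
# API polls. IDs and config fields are illustrative only.

def diff_inventory(previous, current):
    """Compare two {resource_id: config} snapshots and report changes."""
    created = sorted(rid for rid in current if rid not in previous)
    terminated = sorted(rid for rid in previous if rid not in current)
    modified = sorted(rid for rid in current
                      if rid in previous and current[rid] != previous[rid])
    return {"created": created, "modified": modified, "terminated": terminated}

before = {
    "i-001": {"type": "vm", "public": False},
    "i-002": {"type": "db", "public": False},
}
after = {
    "i-001": {"type": "vm", "public": True},              # now exposed
    "i-003": {"type": "serverless-fn", "public": False},  # newly created
}

print(diff_inventory(before, after))
```

In practice the snapshots would come from the provider's APIs rather than hard-coded dictionaries, which is what makes this approach work even for resources (such as serverless functions) where no agent can be installed.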

Automatic Threat Detection
The threat vectors in public cloud environments are the same as those in on-premises environments, but the approaches to detecting them are different.

  • Risky configurations: Establish baseline configurations for cloud resources based on industry standards such as CIS, NIST, or PCI and automatically flag any deviations. For instance, an alert should be triggered if a user exposes a cloud storage service to the public.
  • Vulnerable hosts: Correlate feeds from third-party vulnerability management tools with cloud data sets such as configurations, network traffic, etc. This helps pinpoint vulnerable hosts within an environment and offers the opportunity to prioritize the hosts for patching based on the severity of the risk. For example, it's more important to patch hosts that are exposed to the Internet because they're easier to exploit.
  • Suspicious user activities: Baseline each user's activity to establish "normal" behavior, which makes it easier to spot anomalous patterns. This will highlight threats such as intrusions via compromised user accounts, or even insiders acting maliciously.
  • Network intrusions: Correlate network traffic data with data from your public cloud environment and third-party threat intelligence to detect suspicious activities. This detects threats such as cryptojacking, where attackers use organizations' computing power to generate cryptocurrency.
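The configuration-drift detection described above can be sketched as a set of baseline rules, modeled loosely on CIS-style checks, evaluated against every discovered resource. The rule names and resource fields here are assumptions made for illustration.

```python
# Hedged sketch: flag deviations from a baseline configuration, such as a
# storage service exposed to the public. Rules and fields are illustrative.

BASELINE_RULES = {
    "storage-not-public":
        lambda r: not (r.get("type") == "storage" and r.get("public")),
    "encryption-enabled":
        lambda r: r.get("encrypted", False) if r.get("type") == "storage" else True,
}

def audit(resources):
    """Return (resource_id, rule_name) pairs for every baseline deviation."""
    alerts = []
    for rid, res in resources.items():
        for name, passes in BASELINE_RULES.items():
            if not passes(res):
                alerts.append((rid, name))
    return alerts

resources = {
    "bucket-1": {"type": "storage", "public": True, "encrypted": True},
    "bucket-2": {"type": "storage", "public": False, "encrypted": True},
    "vm-1": {"type": "vm", "public": True},
}
print(audit(resources))
```

Here the publicly exposed bucket trips an alert automatically, which is the kind of check that would otherwise require a manual audit.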

Automatic Response
Once risks are detected, they need immediate, and ideally automatic, remediation.

  • User attribution: Instead of inundating security teams with more alerts (they suffer from alert fatigue anyway), these should be routed directly to the responsible user. Besides dealing with the problem itself, this cuts down on unnecessary communication between security and DevOps. To be clear, this only works if the system can identify the responsible user, which requires an audit trail of user activities.
  • Contextual alerts: Alerts must provide enough context to help the responsible user understand the risk and take appropriate action. For example, a security group that's open is a problem, but not necessarily the highest risk. By contrast, knowing that an open security group is associated with a database that's receiving traffic from a suspicious IP address definitely is a high risk and needs immediate remediation.
  • Workflow integration: The alerts should be automatically sent to workflow management tools for further investigation or orchestration of the fix. This enables organizations to leverage existing workflows and playbooks.
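The user-attribution idea above can be sketched as follows: given an audit trail of change events, route each alert to the last user who touched the offending resource, falling back to the security team's queue only when no one can be identified. The event fields and the fallback queue name are hypothetical.

```python
# Sketch: route alerts to the user responsible for the offending change,
# based on an audit trail, instead of flooding the security team's queue.
# Field names and the fallback queue are assumptions for illustration.

def route_alert(alert, audit_trail):
    """Return the user who last changed the alert's resource, else a fallback."""
    for event in reversed(audit_trail):  # scan newest events first
        if event["resource"] == alert["resource"]:
            return event["user"]
    return "security-team"               # no attribution possible

audit_trail = [
    {"resource": "sg-042", "user": "alice", "action": "create"},
    {"resource": "sg-042", "user": "bob", "action": "open-to-world"},
    {"resource": "db-7", "user": "carol", "action": "create"},
]

alert = {"resource": "sg-042", "rule": "security-group-open-to-world"}
print(route_alert(alert, audit_trail))
```

Because the routing decision depends entirely on the audit trail, this only works when user activity is being recorded continuously, which is the point made above.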

Again, the fact that DevOps has crashed barriers and demolished silos, all to speed development and deployment, is a good thing. It's time that security kept pace — and the tools to do that are now available.


Allan Kristensen, Vice President of Solutions Engineering at RedLock, is a technology leader who embraces a customer-first approach to build and grow emerging technologies into market leaders. He has over 15 years of experience in building successful solutions engineering ...
Comments
allankristensen, User Rank: Author
10/5/2018 | 9:56:27 PM
Re: Identity, Identity, Identity
Identity management is definitely a key part of building a secure public cloud infrastructure, because it's critical to control WHO can log into your public cloud environments as well as to ENFORCE secure login methods.

However, the reality is that public cloud configuration drift continues to happen, which is why it's equally important to have ongoing user credential configuration monitoring in place to catch identity configuration drift, such as MFA not being enforced, access keys not being rotated, password policies not being enforced, and unused access keys or user IDs.

Furthermore, it's critical to have a solid, automated user anomaly detection system in place to distinguish between normal and unusual user activities. Public cloud environments are highly dynamic, and suspicious activities, such as spinning up computing power for crypto mining with a compromised access key, can easily fly under the radar if User Behavior Analytics isn't used effectively to detect and alert on them.

The same applies to vulnerable hosts in public cloud environments. It's critical to have efficient tools in place to detect and alert on vulnerable hosts, but it's even more important to correlate this information with other threat factors, such as determining which vulnerable hosts have received traffic from suspicious IP addresses, and, for those, quickly determining the change history (what was changed and by whom). Correlated information like this helps security teams prioritize and react accordingly.

Last but not least, auto-remediation capabilities are important as well, because especially for identity configuration drift it's critical to be able to respond immediately, and without human involvement.

lunny, User Rank: Strategist
10/3/2018 | 11:49:34 AM
Identity, Identity, Identity
Identity, especially non-human identities (not just the ID), must be understood. Inventory matters, especially as assets in the cloud are more ephemeral and transient than ever. Who is accessing or doing what, when are they doing it, why are they doing it, and where are they doing it? Are they authorized? How would a threat actor compromise the identity of an asset, be it the database owner service ID, root on a server, or a masquerade as a trusted API interface? The concept of the right person (or non-person entity) being able to do the right thing, in the right place, at the right time, and for the right reasons is where we must start. Along with that, the converse is important (the WRONG person or thing being able to do it). A threat actor, especially a well-informed internal one, understands the weak links in this chain and WILL exploit them. Passwords created and known by human beings make the attacks easy. Reference Equifax: it wasn't the Struts vulnerability; rather, it was too easy to get the administrator credentials to over 50 databases once they climbed in through the bedroom window. Humans are provably fallible in this regard. It's not that you don't trust your privileged users; it's that you cannot. They pick horrible passwords, share them extensively, and rarely, if ever, change them. That grumbly Linux systems administrator who left the organization a year and a half ago still knows the passwords. Imagine his or her ability to damage your company if so motivated.

Patching known vulnerabilities matters, monitoring for zero-day indicators matters, and anti-virus/anti-malware protection matters. But at some point, we just run out of fingers to put in the dike. Any organization that isn't managing identity well and taking it very seriously is whistling past the graveyard.