Dark Reading is part of the Informa Tech Division of Informa PLC



Putting Security on Par with DevOps

Inside the cloud, innovation and automation shouldn't take a toll on protection.

DevSecOps: It's not a very friendly acronym. It reeks of techno-babble, sounds a little military, and resists a consumer connection. But think again. This is a vital discipline that's directly relevant to every enterprise and every individual, particularly within cloud infrastructures, and has long deserved greater attention.

Maybe that's why we're now seeing greater research and more discussion devoted to the subject. But what's really at stake here? And what needs to happen next?

First, let's understand the context. Cloud computing has transformed the way organizations create and manage digital services, and that includes a big change in how software is developed and deployed. DevOps was designed to break down silos among development, quality assurance and IT operations, and speed innovation in the process. This meant teams outside the IT orbit took control, and the always-on public cloud certainly helped.

But there was one little hiccup — as lines around ownership and accountability got blurred, security got left behind. Flexibility, yes; competitive advantage, sure; innovation, absolutely. Protection? Not so much. So, moving forward, here's a blueprint for gaining security without compromising productivity.

DevSecOps is nothing more — and nothing less — than the process of uniting the two main stakeholders, DevOps and security, in a spirit of collaboration. Many organizations have multiple DevOps teams, especially with multiple business units. That's why it's important for the security practice to own the cloud security program, which can encompass uniform monitoring and central visibility across all public cloud environments.

Another obstacle is that DevOps is heavily automated, which is a good thing, while many aspects of traditional security involve manual audits. If DevSecOps is to work, security must be similarly automated, but professionals in this field worry that automation will give rise to endless alerts. However, there have been major advances, and solutions are now available to implement a fully automated security workflow that not only surfaces issues but also greatly eases their immediate resolution.

With that as the foundation, here are some best practices to build upon.

Automatic Discovery
Public cloud environments are constantly changing — that's actually a major advantage — and it's not feasible to manually audit the entire landscape for assets.

  • Resource discovery: Discover cloud resources as soon as they're created, modified, or terminated. An API-based approach for automated discovery is more scalable than an agent-based approach; some types of cloud resources don't allow agents to be installed, which creates blind spots. This is especially important as organizations increasingly adopt serverless computing (e.g., AWS Lambda).
  • Application profiling: Discover which applications are running on the hosts to better assess risk. For example, a publicly accessible host running MongoDB software with a known vulnerability poses a higher risk than a publicly accessible web server with no vulnerabilities.
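As a rough illustration, the discovery step can be sketched in Python. The nested response shape below mirrors AWS's EC2 DescribeInstances API, but the API fetch itself is out of scope and the flattened record fields are simplified assumptions, not a production inventory model:

```python
# Sketch: normalizing a cloud provider's inventory API response
# (shape modeled on EC2 DescribeInstances) into a flat asset list.

def summarize_instances(describe_response):
    """Flatten the nested Reservations/Instances structure into asset records."""
    assets = []
    for reservation in describe_response.get("Reservations", []):
        for inst in reservation.get("Instances", []):
            assets.append({
                "id": inst["InstanceId"],
                "state": inst["State"]["Name"],
                # a public IP marks the host as internet-exposed
                "public": "PublicIpAddress" in inst,
            })
    return assets

# Example response fragment (field names match the real API):
sample = {
    "Reservations": [
        {"Instances": [
            {"InstanceId": "i-0abc", "State": {"Name": "running"},
             "PublicIpAddress": "203.0.113.10"},
            {"InstanceId": "i-0def", "State": {"Name": "stopped"}},
        ]}
    ]
}
print(summarize_instances(sample))
```

Running this kind of normalization on every create/modify/terminate event, rather than on a periodic scan, is what keeps the inventory current in a fast-changing environment.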

Automatic Threat Detection
The threat vectors in public cloud environments are the same as those in on-premises environments, but the approaches to detecting them are different.

  • Risky configurations: Establish baseline configurations for cloud resources based on industry standards such as CIS, NIST, or PCI and automatically flag any deviations. For instance, an alert should be triggered if a user exposes a cloud storage service to the public.
  • Vulnerable hosts: Correlate feeds from third-party vulnerability management tools with cloud data sets such as configurations, network traffic, etc. This helps pinpoint vulnerable hosts within an environment and offers the opportunity to prioritize the hosts for patching based on the severity of the risk. For example, it's more important to patch hosts that are exposed to the Internet because they're easier to exploit.
  • Suspicious user activities: Baseline each user's activity to establish "normal" behavior, which makes it easier to spot anomalous patterns. This will highlight threats such as intrusions via compromised user accounts, or even insiders acting maliciously.
  • Network intrusions: Correlate network traffic data with data from your public cloud environment and third-party threat intelligence to detect suspicious activities. This detects threats such as cryptojacking, where attackers use organizations' computing power to generate cryptocurrency.
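A minimal sketch of the risky-configuration check above, assuming hypothetical config records already pulled from the cloud APIs. Real baselines such as CIS contain hundreds of rules, but the pattern is the same: a table of compliance predicates evaluated against every resource, with each deviation emitted as an alert:

```python
# Sketch: flagging deviations from a CIS-style configuration baseline.
# The rule IDs and config fields are illustrative assumptions.

BASELINE_RULES = [
    # (rule id, predicate that returns True when the config is compliant)
    ("storage-not-public", lambda c: not (c["type"] == "storage" and c.get("public_read"))),
    ("mfa-enforced",       lambda c: not (c["type"] == "user" and not c.get("mfa"))),
]

def check_baseline(configs):
    """Evaluate every resource against every rule; collect violations."""
    alerts = []
    for cfg in configs:
        for rule_id, compliant in BASELINE_RULES:
            if not compliant(cfg):
                alerts.append({"resource": cfg["id"], "rule": rule_id})
    return alerts

configs = [
    {"id": "bucket-logs", "type": "storage", "public_read": True},
    {"id": "alice", "type": "user", "mfa": True},
]
print(check_baseline(configs))
# [{'resource': 'bucket-logs', 'rule': 'storage-not-public'}]
```

The publicly readable storage bucket is flagged immediately, which is exactly the "user exposes a cloud storage service to the public" case described above.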
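User-activity baselining can likewise be reduced to a toy example. The sketch below flags activity more than three standard deviations from a user's historical mean; hourly API-call counts are an assumed feature here, and a real system would use far richer behavioral signals:

```python
# Sketch: per-user anomaly detection against a simple statistical baseline.
import statistics

def is_anomalous(history, current, threshold=3.0):
    """Flag activity that deviates more than `threshold` sigmas from baseline."""
    if len(history) < 2:
        return False  # not enough data to establish a baseline
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # avoid divide-by-zero on flat history
    return abs(current - mean) / stdev > threshold

api_calls_per_hour = [12, 9, 14, 11, 10, 13]  # the user's "normal" behavior
print(is_anomalous(api_calls_per_hour, 15))   # prints False: within normal range
print(is_anomalous(api_calls_per_hour, 400))  # prints True: e.g. a cryptomining burst
```

The sudden burst of activity is the pattern that would catch a compromised account spinning up compute for cryptojacking, as described in the network-intrusions bullet.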

Automatic Response
Once risks are detected, they need immediate, and ideally automated, remediation.

  • User attribution: Instead of inundating security teams with more alerts (they suffer from alert fatigue anyway), these should be routed directly to the responsible user. Besides dealing with the problem itself, this cuts down on unnecessary communication between security and DevOps. To be clear, this only works if the system can identify the responsible user, which requires an audit trail of user activities.
  • Contextual alerts: Alerts must provide enough context to help the responsible user understand the risk and take appropriate action. For example, a security group that's open is a problem, but not necessarily the highest risk. By contrast, knowing that an open security group is associated with a database that's receiving traffic from a suspicious IP address definitely is a high risk and needs immediate remediation.
  • Workflow integration: The alerts should be automatically sent to workflow management tools for further investigation or orchestration of the fix. This enables organizations to leverage existing workflows and playbooks.
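Putting user attribution and contextual alerts together, a simplified router might look like the following. The audit-trail mapping and field names are hypothetical; the point is the shape of the logic: attribute the finding to the responsible user, attach context, and escalate severity only when the context warrants it:

```python
# Sketch: routing a contextual alert to the responsible user instead of a
# shared security queue. Audit-trail and finding fields are illustrative.

def build_alert(finding, audit_trail):
    # Attribution requires an audit trail of who last changed the resource;
    # without it, fall back to the central security queue.
    owner = audit_trail.get(finding["resource"], "security-team")
    context = finding.get("context", {})
    return {
        "to": owner,
        "resource": finding["resource"],
        "issue": finding["issue"],
        "context": context,  # lets the recipient judge the risk and act
        "severity": "high" if context.get("suspicious_traffic") else "medium",
    }

audit_trail = {"sg-0123": "bob"}  # bob opened this security group
finding = {
    "resource": "sg-0123",
    "issue": "security group open to 0.0.0.0/0",
    "context": {"attached_db": "orders-db", "suspicious_traffic": True},
}
alert = build_alert(finding, audit_trail)
print(alert["to"], alert["severity"])  # prints: bob high
```

The resulting alert object is also what would be handed to a workflow management tool for orchestration of the fix.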

Again, the fact that DevOps has crashed barriers and demolished silos, all to speed development and deployment, is a good thing. It's time that security kept pace — and the tools to do that are now available.

Black Hat Europe returns to London Dec. 3-6, 2018, with hands-on technical Trainings, cutting-edge Briefings, Arsenal open-source tool demonstrations, top-tier security solutions, and service providers in the Business Hall.

Allan Kristensen, Vice President of Solutions Engineering at RedLock, is a technology leader who embraces a customer-first approach to build and grow emerging technologies into market leaders. He has over 15 years of experience in building successful solutions engineering ... View Full Bio

User Rank: Strategist
10/3/2018 | 11:49:34 AM
Identity, Identity, Identity
Identity, especially non-human identities (not just the ID), must be understood.  Inventory matters, especially where assets in the cloud are more ephemeral and transient than ever.  Who is accessing/doing what, when are they doing it, why are they doing it, where are they doing it?  Are they authorized?  How would a threat actor compromise the identity of an asset, be it the database owner service ID, root on a server, masquerading as a trusted API interface, etc.?

The concept of the right person (or non-person entity) being able to do the right thing, in the right place, at the right time, and for the right reasons is where we must start.  Along with that, the converse is important (the WRONG person/thing being able to do it).  A threat actor, especially a well-informed internal threat actor, understands the weak links in this chain and WILL exploit them.

Passwords created and known by human beings make the attacks easy.  Reference Equifax - it wasn't the Struts vulnerability; rather it was too easy to get the administrator credentials to over 50 databases once they climbed in through the bedroom window.  Humans are provably fallible in this regard.  It's not that you don't trust your privileged users, it's that you cannot.  They pick horrible passwords, share them extensively, and rarely, if ever, change them.  That grumbly Linux systems administrator who left the organization a year and a half ago still knows the passwords.  Imagine his or her ability to damage your company if so motivated.

Patching known vulnerabilities matters, monitoring for zero-day indicators matters, anti-virus/anti-malware protection matters.  But at some point, we just run out of fingers to put in the dyke.  Any organization that isn't managing identity well and taking it very seriously is whistling past the graveyard.
User Rank: Author
10/5/2018 | 9:56:27 PM
Re: Identity, Identity, Identity
Identity management is definitely a key part of building a secure public cloud infrastructure, because it's critical to control WHO can log into your public cloud environments as well as to ENFORCE secure login methods.

However, the reality is that public cloud configuration drift continues to happen, which is why it's equally important to have ongoing user credential configuration monitoring in place to catch identity configuration drift, such as MFA not being enforced, access keys not being rotated, password policies not being enforced, and unused access keys or user IDs.

Furthermore, it's critical to have a solid, automated user anomaly detection system in place to be able to distinguish between normal and unusual user activities. Public cloud environments are highly dynamic, and in today's world suspicious activities, such as spinning up computing power for crypto mining using a compromised access key, can easily fly under the radar if user behavior analytics are not used effectively to detect and alert on them.

The same applies to vulnerable hosts in public cloud environments. It's critical to have efficient tools in place to detect and alert on vulnerable hosts, but it's even more important to be able to correlate this information with other threat factors, such as determining which vulnerable hosts have received traffic from suspicious IP addresses, and, for those, quickly determining the change history (what was changed and by whom). Correlated information like this helps security teams prioritize and react accordingly.

Last but not least, auto-remediation capabilities are important as well, because for identity configuration drift especially, it's critical to be able to respond immediately and without human involvement.


User Rank: Apprentice
4/8/2019 | 1:23:58 AM
The technology field, especially DevOps consulting, requires an immense professional skill set that equates to relatively good job security; it's sites like this that are helping to fill in the gaps.