
Cloud | Commentary
7/15/2019 10:00 AM
Pawan Shankar

Is Machine Learning the Future of Cloud-Native Security?

The nature of containers and microservices makes them harder to protect. Machine learning might be the answer going forward.

Cloud-native architectures help businesses reduce application development time and increase agility at a lower cost. Although flexibility and portability are key drivers of adoption, a cloud-native structure brings with it a new challenge: managing security and performance at scale.

Challenges in the Cloud
The nature of containers and microservices makes them harder to protect, for several reasons:

1. They have a dissolved perimeter, meaning that once a traditional perimeter is breached, lateral movement of attacks (such as malware or ransomware) often goes undetected across data centers and/or cloud environments.

2. With a DevOps mindset, developers are continuously building, pushing, and pulling images from various registries, leaving the door open for various exposures, whether they are operating system vulnerabilities, package vulnerabilities, misconfigurations, or exposed secrets.

3. Containers are ephemeral and opaque, and they leave a massive amount of data in their wake, making visibility into the risk and security posture of a containerized environment extremely complicated. Sorting through interconnected data from thousands of services across millions of short-lived containers to pin down a specific security or compliance violation in time is akin to finding a needle in a haystack.

4. With increased development speed, security is being pushed later into the development cycle. Developers are failing to bake security in early, opting instead to bolt it on at the end, which ultimately increases the chance of exposures in the infrastructure. (One way to pull a check forward into CI is sketched just after this list.)
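To make that last point concrete, here is a minimal sketch of shifting one check left: failing a CI build when an image carries high-severity vulnerabilities. It assumes the open-source Trivy scanner is installed on the build agent; the image name is a placeholder, not part of any real pipeline.

```python
import subprocess
import sys

def gate_on_scan(image: str) -> int:
    """Fail the CI step if the image has HIGH or CRITICAL findings.

    Assumes the Trivy CLI is on PATH; with --exit-code 1, Trivy
    returns nonzero whenever findings at these severities exist.
    """
    result = subprocess.run(
        ["trivy", "image",
         "--exit-code", "1",
         "--severity", "HIGH,CRITICAL",
         image]
    )
    return result.returncode

if __name__ == "__main__":
    # Placeholder image tag; a CI job would pass the freshly built one.
    sys.exit(gate_on_scan(sys.argv[1] if len(sys.argv) > 1 else "myapp:latest"))
```

Run early, a gate like this catches operating system and package vulnerabilities before an image ever reaches a registry, rather than after it is deployed.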

With tight budgets and constant pressure to innovate, machine learning (ML) and AIOps (artificial intelligence for IT operations) are increasingly being built into security vendors' road maps because, at least at this point, they are the most realistic way to decrease the burden on security professionals in modern architectures.

What Makes ML a Good Fit?
As containers are constantly spun up and down on demand, there is no margin for error in security. An attacker has to succeed just once, and that is much easier in a cloud-native environment that is constantly evolving, especially when security struggles to keep up. Runtime environments can now be compromised through insider attacks, policy misconfigurations, zero-day threats, and external attacks.

It is hard for a resource-starved security team to secure against these threats manually, at scale, in such a dynamic environment. It may take hours or days before a security profile is adjusted, which is plenty of time for an attacker to exploit that window of opportunity.

Over the last few decades, we have witnessed tremendous progress in ML algorithms and techniques. It has now become possible for individuals who do not necessarily have a statistical background to take models and apply them to various problems.

Containers are a good fit for supervised learning models for the following reasons:

1. Containers have a minimal surface area: Because containers are fundamentally designed for modular tasks and have small footprints, it is easier to baseline the activity inside them and decide what is normal versus abnormal. A virtual machine may have hundreds of binaries and processes running; a container has far fewer.

2. Containers are declarative: Rather than piecing together arbitrary manifests, DevOps teams can look at the daemon and the container environment to understand exactly what a specific container should be allowed to do at runtime.

3. Containers are immutable: Immutability serves, in theory, as a guardrail against changes at runtime. For example, if a container suddenly starts running netcat, that could be an indicator of a potential compromise. (A minimal baselining sketch built on these three properties follows this list.)
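These three properties are what make baselining tractable. The sketch below is a deliberately simplified illustration, not any vendor's implementation: it learns the set of processes an image runs during a learning window, then flags anything outside that set at runtime. The image and process names are hypothetical.

```python
from collections import defaultdict

# Learned runtime profiles: image name -> processes observed while learning.
profiles = defaultdict(set)

def learn(image, process):
    """Record a process observed during the learning window."""
    profiles[image].add(process)

def is_expected(image, process):
    """After learning, anything outside the baseline is suspect."""
    return process in profiles[image]

# Learning window: a small web container runs its server and a health check.
for process in ("nginx", "curl"):
    learn("web:1.0", process)

# Runtime: the server is expected; a sudden netcat is flagged.
print(is_expected("web:1.0", "nginx"))  # True  -> allow
print(is_expected("web:1.0", "nc"))     # False -> alert
```

Because a container's footprint is small and its declared behavior is knowable in advance, a baseline like this stays short and stable; the same approach applied to a general-purpose VM would drown in legitimate variety.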

Given these characteristics, ML models can learn container behavior, which makes them more accurate when creating runtime profiles of what should and should not be allowed. Letting machines define pinpointed profiles and automatically spot indicators of potential threats improves detection. It also alleviates some of the burnout on the security operations center team: analysts no longer have to hand-craft rules for each container environment and can focus on response and remediation rather than manual detection.
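Where a simple allowlist is too coarse, an unsupervised anomaly detector over per-container activity features can play the same role. The following sketch uses scikit-learn's IsolationForest on invented feature vectors (per-interval counts of syscalls, outbound connections, and spawned processes); the features and numbers are assumptions for illustration, not a production pipeline.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-interval features for one container:
# [syscalls, outbound connections, processes spawned]
baseline = np.array([
    [120, 2, 1], [130, 3, 1], [125, 2, 1], [118, 2, 1],
    [122, 3, 1], [128, 2, 1], [121, 2, 1], [126, 3, 1],
])

# Fit a profile of "normal" from the learning window.
model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# Score fresh observations: predict() returns 1 for normal, -1 for anomalous.
fresh = np.array([
    [124, 2, 1],    # consistent with the baseline
    [560, 40, 7],   # burst of connections and processes, e.g., lateral movement
])
print(model.predict(fresh))  # expected output: [ 1 -1]
```

The point is not this particular model but the division of labor: the machine maintains the profile continuously, and analysts see only the scored exceptions.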

In this new world, security has to keep up with an ever-changing technology landscape. Teams must equip themselves with cloud-native security tools that cut through the noise and surface the insights they actually need. Without ML, security teams find themselves stuck on details that don't matter and missing the ones that do.


Pawan Shankar has more than eight years of experience in enterprise networking and security. Previously, he worked for Cisco as an SE and a PM working with large enterprises on data center/cloud networking and security solutions. He also spent time at Dome9 (acquired by Check ...