Commentary | Hamid Karimi | 3/16/2018

The Containerization of Artificial Intelligence

AI automates repetitive tasks and alleviates mundane functions that often haunt decision makers. But it's still not a sure substitute for security best practices.

Artificial intelligence (AI) holds the promise of transforming both static and dynamic security measures to drastically reduce organizational risk exposure. Turning security policies into operational code is a daunting challenge facing agile DevOps today. In the face of constantly evolving attack tools, building a preventive defense requires a large set of contextual data, such as historic actuals, as well as predictive analytics and advanced modeling. Even if such a feat is accomplished, SecOps still needs a reactive, near real-time response based on live threat intelligence to augment it.
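To make "security policies as operational code" a bit more tangible, here is a minimal, hypothetical sketch (the ConnectionRequest type, the rule choices, and the hard-coded blocklist are all invented for illustration) of a codified preventive rule set augmented by a reactive check against live threat intelligence:

```python
from dataclasses import dataclass

# Hypothetical threat-intel feed: in a real deployment this would be
# refreshed continuously from a live source, not hard-coded.
LIVE_BLOCKLIST = {"203.0.113.7", "198.51.100.23"}

@dataclass
class ConnectionRequest:
    source_ip: str
    dest_port: int
    protocol: str

def preventive_policy(req: ConnectionRequest) -> bool:
    """Static, codified policy: only allow TLS traffic to approved ports."""
    allowed_ports = {443, 8443}
    return req.protocol == "tls" and req.dest_port in allowed_ports

def reactive_policy(req: ConnectionRequest) -> bool:
    """Near real-time check against live threat intelligence."""
    return req.source_ip not in LIVE_BLOCKLIST

def is_allowed(req: ConnectionRequest) -> bool:
    # Both the codified baseline and the live-intel check must pass.
    return preventive_policy(req) and reactive_policy(req)

print(is_allowed(ConnectionRequest("192.0.2.10", 443, "tls")))   # True
print(is_allowed(ConnectionRequest("203.0.113.7", 443, "tls")))  # False: on the blocklist
```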

While AI is more hype than reality today, machine intelligence — also referred to as predictive machine learning, driven by meta-analysis of large data sets using correlations and statistics — provides practical measures to reduce the need for human intervention in policy decision-making.
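As a hedged illustration of that idea — synthetic data and an arbitrary model choice, assuming scikit-learn is available, not a prescription — a simple statistical baseline can already take routine policy decisions out of human hands and escalate only the outliers:

```python
import numpy as np
from sklearn.ensemble import IsolationForest  # assumes scikit-learn is installed

rng = np.random.default_rng(42)

# Synthetic flow metadata: [bytes transferred, connection duration in seconds].
normal_traffic = rng.normal(loc=[5_000, 30], scale=[1_000, 10], size=(500, 2))
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

# New observations: one routine flow and one that moves far more data.
candidates = np.array([[5_200, 28], [250_000, 400]])
for flow, verdict in zip(candidates, model.predict(candidates)):
    # predict() returns 1 for inliers and -1 for outliers; only the
    # outliers need to be escalated to a human analyst.
    action = "auto-approve" if verdict == 1 else "escalate for review"
    print(flow, "->", action)
```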

A typical by-product of such an application is the creation of models of behavior that can be shared across policy stores for baselining or policy modification. The impact goes beyond SecOps and can provide the impetus for integration within broader DevOps. Adoption of AI can be disruptive to organizational processes and must sometimes be weighed against the cost of dismantling existing analytics and rule-based models.
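One way to picture such a shareable model of behavior (a deliberately simplified sketch; the metric, field names, and thresholds are invented) is as a small, portable baseline artifact that other policy stores can import for baselining or rule adjustment:

```python
import json
import statistics

# Observed per-user login hours from one environment (synthetic sample).
login_hours = [8, 9, 9, 10, 8, 9, 11, 9, 10, 8]

# Distill the observations into a compact behavioral baseline.
baseline = {
    "metric": "login_hour",
    "mean": statistics.mean(login_hours),
    "stdev": statistics.stdev(login_hours),
    "tolerance_stdevs": 3,  # importing policy stores can tighten or relax this
}

# Serialized, the baseline becomes a model of behavior that can be
# shared across policy stores and used to flag deviations locally.
print(json.dumps(baseline, indent=2))
```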

The application of AI must be constructed on the principle of shared security responsibility. Under this model, both technologists and organizational leaders (CSOs, CTOs, CIOs) accept joint responsibility for securing data and corporate assets, because security is no longer strictly the domain of specialists and affects both operational and business fundamentals. The specter of draconian regulatory compliance, such as the fines articulated in the EU's General Data Protection Regulation, provides an evocative forcing function.

Focus on Specifics
Instead of perceiving AI as a cure-all, organizations should focus on specific areas where it holds the promise of greater effectiveness. Certain use cases provide more fertile ground for the deployment and evolution of AI: the rapid expansion of cloud computing, microsegmentation, and containers offer good examples. Even in these categories, shared owners must balance the promises and perils of deploying AI by recognizing the complexity of the technology while avoiding the cost of ignoring it altogether.

The east-west and north-south architecture of data flow has its perils, as we witnessed in the recent near-meltdown of public cloud services. The historic emphasis on capacity and scaling has brought us to a clever model of computing that involves many layers of abstraction. With abstraction, we have essentially removed the classic stack model, and adding security to it therefore presents a serious challenge.

Furthermore, the shift of focus away from the nuts and bolts of infrastructure toward application development in isolation and insulation has given rise to the expectation that even geo-scale applications inside containers and Web-scale microservices can be independently secured while maintaining automated and scalable middleware. Hyperscale computing, relying on millisecond availability in distributed zones, is more than an infrastructure play and increasingly depends on microsegmentation and container-based application services — a phenomenon whose long-term success depends on AI.

In the '90s, VLANs were supposed to give us protective isolation and the ability to offer a productive computing space based on roles and responsibilities. That promise fell far short of expectations. Microsegmentation and containers are, in a way, an evolution of VLANs. They have brought other benefits, such as reducing pressure on firewall rules; there is no longer a need to keep track of exponentially growing rules with little visibility, a situation that leads to false positives and false negatives. Although the overall attack surface is reduced and collateral damage is partially abated, the potential for more persistent breaches is not. AI tools can zero in on a smaller subset of data and create better mappings without affecting user productivity or undermining the overlay concept of segmented computing.
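To make the "smaller subset of data" point concrete, the toy sketch below (segment names and flows are invented) learns a per-segment peer map from observed traffic, so each segment's rule set stays small rather than becoming one global, ever-growing list:

```python
from collections import defaultdict

# Observed flows as (source_segment, destination_segment) pairs.
observed_flows = [
    ("web", "app"), ("app", "db"), ("web", "app"),
    ("app", "db"), ("batch", "db"),
]

# Learn a per-segment map of destinations actually in use.
segment_map = defaultdict(set)
for src, dst in observed_flows:
    segment_map[src].add(dst)

def is_expected(src: str, dst: str) -> bool:
    """Flag flows that fall outside a segment's learned baseline."""
    return dst in segment_map.get(src, set())

print(is_expected("web", "app"))  # True: matches the learned mapping
print(is_expected("web", "db"))   # False: candidate for investigation
```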

It is pretty much a one-two-three punch: the organization can look at all available metadata, feed it to the AI, and then take the AI's output to predictive analytics engines to create advanced models of potential attacks that are either in progress or will soon commence. We are still a few years away from implementing another potential step: machine-to-machine learning and security measures whereby machines can observe and absorb relevant data and modify their posture to protect themselves from predicted attacks.
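A skeletal view of that one-two-three flow might look like the following (purely illustrative; each function stands in for a much larger system, and the volume-based risk score is a placeholder for a real model):

```python
def collect_metadata(events):
    """Step 1: gather whatever metadata is available for each event."""
    return [{"src": e[0], "dst": e[1], "bytes": e[2]} for e in events]

def score_with_ai(records):
    """Step 2: stand-in for a trained model; here, a crude volume heuristic."""
    return [dict(r, risk=min(1.0, r["bytes"] / 1_000_000)) for r in records]

def predictive_analytics(scored, threshold=0.8):
    """Step 3: turn model output into a forecast of attacks in progress."""
    return [r for r in scored if r["risk"] >= threshold]

raw_events = [("10.0.0.5", "10.0.1.9", 12_000), ("10.0.0.8", "203.0.113.50", 950_000)]
print(predictive_analytics(score_with_ai(collect_metadata(raw_events))))
```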

AI can also provide substantial value in other emerging areas such as autonomous driving. Cars increasingly resemble computing machines with direct cloud command and control. From offline modeling based on fuzzing to real-time analysis of sensor data, we may rely on AI to reduce risks and liabilities.

Artificial intelligence is not a panacea; however, it automates repetitive tasks and alleviates the mundane functions that often haunt security decision makers. Like other innovations in security, it will go through its evolutionary cycle and eventually find its rightful place. In the meantime, there is still no sure substitute for security best practices.

Hamid Karimi has extensive knowledge of cybersecurity, and for the past 15 years his focus has been exclusively in the security space, covering diverse areas of cryptography, strong authentication, vulnerability management, and malware threats, as well as cloud and network ...