4/2/2018
09:35 AM
Simon Marshall

Red Bull Powers Security Strategy With AI, Automation

When it comes to security, Red Bull is looking to close the gap by turning toward newer technologies, including automation, AI and machine learning.

Red Bull is well known for projecting an energetic brand. Behind the scenes, its IT security team likes to be energetic too, though not in the way the company's commercials would have you believe.

Despite the play-hard image of the brand, the Red Bull security team likes to be very Zen. About a year ago, it began investing in automating some of its security processes, so detection and response staff could be freed up for higher-value, less tactical thinking.

At times, an enterprise security strategy can do more harm than good when it gets overly defensive. However, when security teams want to be strategic, as Red Bull has shown, automation technology can actually help the security team think, and not just act.

"We don't want to lose the right focus or become over-protective," Jimmy Heschl, Red Bull's CISO, told Security Now, explaining how sometimes reacting to and resolving an incident can be a mistake. Even reacting and remediating correctly, shouldn't ideally -- in his world -- be done manually because it's at the cost of contending against hackers who have time on their hands and are very inventive.

"Overwhelming or excessively intrusive security controls are significant roadblocks, when [we] want to be creative, spontaneous and innovative," Heschl said. "Overreaction from security -- as this is done by colleagues that are primarily driven by various compliance requirements -- has a significant impact on these objectives."

Advent of security automation
A number of tech vendors including Demisto, IBM's Resilient Systems, Microsoft's Hexadite, and Red Bull's vendor, EnSilo, are capturing the mood with orchestration and automation offerings, powered by artificial intelligence, and more specifically, machine learning. (See Automation Answers Security Skills Shortage.)

Gartner's 2017 "Innovation Insight for Security Orchestration, Automation and Response" report finds enterprises hobbled because of analyst time lost to manual, heavy-lift processes.

"Security operations still primarily rely on manually created and maintained, document-based procedures for operations, which leads to issues such as longer analyst onboarding times, stale procedures, tribal knowledge and inconsistencies in executing operational functions," according to the report.

Increasingly, the engine behind endpoint detection and response (EDR) automation is AI and machine learning. These technologies sit near the top of the hype curve, and for some organizations they offer not only to automate manual work, but to actively couple learned threat knowledge with the organization's own security policies and then independently remediate attacks.
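To make that idea concrete, here is a minimal Python sketch of how such a system might pair a classifier's verdict with a business policy to choose a remediation step; the class, thresholds and action names are illustrative assumptions, not any vendor's actual API.

from dataclasses import dataclass

@dataclass
class Detection:
    host: str
    threat: str
    confidence: float  # 0.0-1.0 score from the ML classifier

# Business policy: what the organization allows the system to do on its own.
POLICY = {
    "quarantine_threshold": 0.90,  # isolate the endpoint without asking
    "block_threshold": 0.70,       # block the offending process or connection
}

def remediate(d: Detection) -> str:
    """Map a detection to an action based on confidence and policy."""
    if d.confidence >= POLICY["quarantine_threshold"]:
        return f"quarantine {d.host}"           # high confidence: act alone
    if d.confidence >= POLICY["block_threshold"]:
        return f"block {d.threat} on {d.host}"  # medium: contained action
    return f"open a ticket for analyst review ({d.host})"  # low: human decides

print(remediate(Detection("laptop-042", "suspicious loader", 0.93)))

The point is not the thresholds themselves but that the policy, once encoded, lets the system act in seconds rather than waiting for an analyst.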

That lack of human intervention, however, is exactly what worries Red Bull.

"Automated response is a challenge in itself," Heschl said. "It has to do with giving away control, and automation always has some drawbacks. It's not the detection function that I fear, but automated response from simple mail filters and network blocks; via user and access management to advanced countermeasures: the more complexity you have in response, the more that can go wrong."

The cost of dwell time
The elapsed time between threat detection and response -- dwell time -- is what costs enterprises money: it raises the risk of data theft or damage, and it adds the price of investigation and remediation processes that usually take months.

A 2017 Ponemon Institute study of 419 companies, the "Cost of Data Breach Study," reported that the time to identify and the time to contain malicious attacks averaged 214 and 77 days, respectively. The average cost per breach is currently about $4 million.


Although the current drive towards zero dwell time is noble, it's a massive challenge. Fortunately, a more realistic return on investment in automated EDR is already benefiting Red Bull.

"It's the speed of initiating action [that's important]," Heschl said. "On the other side, it's the automation of response that leaves [us] independent of scarce resources.

"It helps me address my big fear: losing focus. My team can use their time to think and to improve rather than hunt adversaries," Heschl added.

Although it's the number-crunching and learning power of AI and machine learning that supports this drive, despite the hype, the technology itself is relatively unimportant.

"I believe that machine learning and AI are the means to meet and achieve security initiatives," EnSilo CEO Roy Katmor said. "[But] organizations believe in added value -- namely alert efficacy, in pre- and post-infection, and operational efficiency via automation. The technology behind it is less relevant."

— Simon Marshall, Technology Journalist, special to Security Now
