
Vulnerabilities / Threats

8/28/2017 10:30 AM
Hal Lonas

Cybersecurity: An Asymmetrical Game of War

To stay ahead of the bad guys, security teams need to think like criminals, leverage AI's ability to find malicious threats, and stop worrying that machine learning will take their jobs.

In the cybersecurity industry, we’ve all heard the old adage, "We have to be right 100 percent of the time. Cybercriminals only have to be right once."

It may be daunting, but it’s the reality in which the cybersecurity industry lives every day. We’re facing an asymmetrical game of war and, unfortunately, we’re up against an army of cybercriminals with a vast arsenal of weapons at their fingertips.

As in other combat arenas, asymmetrical warfare in cyberspace describes a situation where one side only has to invest modestly to achieve gains, while the other side must invest heavily to maintain an adequate defense. In the cybersecurity industry, the authors and promoters of malware and ransomware would be the former, while the security industry and potential victims make up the latter. This lopsided investment of time and resources is what makes this war asymmetrical.

Let’s take the recent WannaCry ransomware attack, for example. It was a simple enough piece of malware, yet it took many by surprise. Through a unique combination of stolen exploit technology and worm-like propagation, it landed on more than 400,000 machines, all with minimal effort on the part of the perpetrators.

Cybercriminals can afford to be creative and innovative and to test new attacks. Meanwhile, security teams invest their resources in building layers of cybersecurity defenses and in basics like network segmentation and phishing education.

And here’s a scary thought: what will happen when cybercriminals focus their energies on leveraging artificial intelligence (AI)?

AI in the wrong hands could cause an explosion of network penetrations, data theft, and a spread of computer viruses that could shut down devices left and right. It could lead to an AI arms race with unknown consequences. The little clues that hint an email or website isn’t really what it claims to be could be cleaned up by a sufficiently smart AI capability. And that’s scary.


While there is some machine power in polymorphic malware — malware that morphs when it lands on a new machine — this type of malware doesn’t evolve every day. Ransomware took off a few years ago, and it hasn’t changed much since then. We are seeing victims fall prey to the same types of attacks over and over again. Cybercriminals are still able to create complete chaos with their tried-and-true tools.

While we can’t always predict what cybercriminals will try next, some of us are already leveraging AI and machine learning to stay ahead of them. It’s not a silver bullet, but machine learning is fast becoming an important, possibly essential tool for keeping ahead of, or at least quickly detecting, the latest types of attacks. It can improve security by looking at the network on an ongoing basis and leveraging a threat research team’s abilities to create a sum greater than its parts. It sets a baseline of normal activity so that anomalous behavior stands out.
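
As a concrete illustration of that baselining idea, here is a minimal sketch in Python using scikit-learn's IsolationForest. Everything in it is a hypothetical assumption: the per-host features (megabytes sent per hour, connections per hour, distinct destination ports) and all of the numbers are invented for illustration and are not taken from any particular product or data set.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=7)

# Hypothetical "normal" traffic collected during a baselining window.
# Columns: MB sent per hour, connections per hour, distinct destination ports.
baseline = np.column_stack([
    rng.normal(50, 10, 1000),
    rng.normal(200, 40, 1000),
    rng.normal(15, 5, 1000),
])

# Learn what "normal" looks like from the baseline window.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(baseline)

# Two new observations: one ordinary host, and one that suddenly blasts
# data to hundreds of destination ports.
new_traffic = np.array([
    [52.0, 210.0, 14.0],
    [900.0, 4000.0, 300.0],
])

# predict() returns 1 for inliers and -1 for anomalies.
for features, label in zip(new_traffic, model.predict(new_traffic)):
    status = "anomalous" if label == -1 else "within baseline"
    print(features.tolist(), "->", status)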

But to stay ahead of the bad guys, the security industry needs to accomplish a few things:

First, we need to think like cybercriminals. Their main motivation is simple: money. They’re constantly thinking about what small action they can take to produce a large outcome, hence the popularity of phishing. They can push out millions of emails with relative ease, send victims to a short-lived site, and reap big benefits. Cybercriminals may tweak their approach, for instance, impersonating technology companies instead of financial institutions (as we found in our 2017 Threat Report), but the mechanisms remain the same. If we leverage machine learning to assist in the mundane or routine tasks of tracking and classifying, as sketched below, our creative minds are free to think like criminals and come up with out-of-the-box solutions to the next attack.
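
As one small, hedged example of that routine classification work, the following Python sketch trains a toy phishing-URL classifier with scikit-learn. The tiny training set, the candidate URL, and the character n-gram feature choice are purely illustrative assumptions; a real system would learn from large, curated threat feeds rather than a handful of hand-picked examples.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, made-up training set: 1 = phishing, 0 = benign.
urls = [
    "http://secure-login.paypa1-verify.example.com/update",
    "http://account-alert.bank0f-example.net/signin",
    "http://helpdesk-micros0ft-support.example.org/reset",
    "https://www.wikipedia.org/wiki/Machine_learning",
    "https://github.com/scikit-learn/scikit-learn",
    "https://docs.python.org/3/library/urllib.html",
]
labels = [1, 1, 1, 0, 0, 0]

# Character n-grams tend to pick up the odd substrings and digit-for-letter
# swaps that phishing domains rely on.
model = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(3, 5)),
    LogisticRegression(max_iter=1000),
)
model.fit(urls, labels)

# Score a new, equally hypothetical URL.
candidate = "http://login-verify.paypa1-support.example.org/account"
print(model.predict([candidate]))        # 1 means "likely phishing"
print(model.predict_proba([candidate]))  # class probabilities for triage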

Second, we need security products that incorporate AI in ways that truly exploit its strengths for finding malicious threats. These solutions must combine intelligence from the best threat researchers with models ready to analyze data and find the threats coming into businesses today. They can be generic or vertical-specific. If we can create programs for more companies to leverage, we will get a leg up on cybercrime.

Finally, we need not fear that machine learning will take our jobs. The real threat comes from not utilizing machine learning. Such avoidance forces your best researchers to complete tedious work instead of being creative and innovative, dreaming up ways to anticipate new forms of attack and protect against them. Machine learning provides supplemental help so that threat researchers can work on bigger issues. And because the human touch is essential to monitoring and shaping machine learning models, the result is a net increase in job creation. 

If we want to level the playing field, we need to embrace machine learning to create a more secure world for everyone. While cybercriminals may not widely leverage machine learning today, it’s only a matter of time before they catch up. And when that day comes, security teams everywhere need to be prepared.


Hal Lonas is the chief technology officer at Webroot, a privately held internet security company that provides state-of-the-art, cloud-based software as a service (SaaS) solutions spanning threat intelligence, detection and remediation. Previously the senior VP of product ... View Full Bio