Dark Reading is part of the Informa Tech Division of Informa PLC


8/10/2018 10:30 AM

The Enigma of AI & Cybersecurity

We've only seen the beginning of what artificial intelligence can do for information security.

Alan Turing is famous for several reasons, one of which is that he cracked the Nazis' seemingly unbreakable Enigma machine code during World War II. Later in life, Turing also devised what would become known as the Turing test for determining whether a computer was "intelligent" — what we would now call artificial intelligence (AI). Turing believed that if a person couldn't tell the difference between a computer and a human in a conversation, then that computer was displaying AI.

AI and information security have been intertwined practically since the birth of the modern computer in the mid-20th century. For today's enterprises, the relationship can generally be broken down into three categories: incident detection, incident response, and situational awareness — i.e., helping a business understand its vulnerabilities before an incident occurs. IT infrastructure has grown so complex since Turing's era that it can take months before personnel notice an intrusion.

Current iterations of machine learning have yielded promising results. Chronicle, recently launched by Google's parent company, Alphabet, lets companies tap its enormous processing power and advanced machine learning capabilities to scan IT infrastructure for unauthorized activity. MIT's AI² platform quickly learns to differentiate true attacks from merely unusual activity, alleviating a vexing problem for IT security teams: false positives. There are numerous other AI-based solutions, such as Palo Alto Networks' Magnifier, which uses machine learning to automate incident response, exploiting another strength of AI: speed.
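To make the idea concrete: systems in this vein typically combine an unsupervised anomaly detector with analyst feedback that suppresses recurring false positives. The following is a toy sketch of that pattern, not any vendor's actual implementation — the data, the z-score threshold, and the `benign_sources` allowlist are all hypothetical.

```python
from statistics import mean, stdev

def zscore_flags(counts, threshold=3.0):
    """Flag time buckets whose event counts deviate strongly from the baseline."""
    mu, sigma = mean(counts), stdev(counts)
    return [abs(c - mu) / sigma > threshold for c in counts]

def triage(counts, sources, benign_sources, threshold=3.0):
    """Suppress flagged anomalies an analyst has already marked benign,
    loosely mimicking the feedback loop that cuts down false positives."""
    flags = zscore_flags(counts, threshold)
    return [f and s not in benign_sources for f, s in zip(flags, sources)]

# Hypothetical hourly failed-login counts; the spike at the end is the attack.
counts = [4, 5, 3, 6, 4, 5, 4, 6, 5, 4, 5, 60]
sources = ["lan"] * 11 + ["ext"]

# "backup-job" traffic was previously labeled benign by an analyst.
alerts = triage(counts, sources, benign_sources={"backup-job"})
```

The key design point is the second pass: the statistical detector alone would flag any large deviation, and the labeled-feedback filter is what keeps known-benign spikes from paging the security team.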

These advances arrive at an opportune moment because the risks from cybercrime are growing rapidly; estimates put the worldwide cost at about $600 billion annually. The average cost of a data breach is estimated at $1.3 million for enterprises and $117,000 for small businesses, and companies are taking note. According to ESG research, 12% of enterprise organizations have already deployed AI-based security analytics extensively, and another 27% have deployed them on a limited basis.

Moreover, cybersecurity will only grow more challenging in the years ahead. Enterprise networks and computers are relatively static and well defined at present, but securing information amid the Internet of Things, in which almost every device will be programmable and therefore hackable, will be far harder. Soon we will have to safeguard not just unseen servers but also our cars and household devices.

Unfortunately, AI is also available to hackers. Dark Web developments to date merit serious discussion, such as machine learning that gets better and better at phishing — tricking people into opening impostor messages in order to compromise them. Further down the road, machines could take impersonation a step further by learning to build convincing fake images. Experts also worry that AI-based hacking programs might reroute or even crash self-piloting vehicles, such as delivery drones.

I suspect that in the future, users on the front end will be blissfully unaware of the battles raging behind the scenes between good and bad learning machines, each side continually innovating to outsmart the other. Already, the synthesis of AI and cybersecurity has yielded fascinating results, and there is no doubt we are only at the beginning. I am reminded of a quote from Turing: "We can only see a short distance ahead, but we can see plenty there that needs to be done."


Dr. Dongyan Wang is Chief AI Officer at DeepBrain Chain, the world's first AI computing platform powered by the blockchain. Dr. Wang has almost 20 years of experience in AI and data science, including at several Fortune 500 companies. Among other accomplishments, Dr. Wang has ... View Full Bio
 
