Commentary
9/6/2016 02:30 PM
Guy Caspi
Introducing Deep Learning: Boosting Cybersecurity With An Artificial Brain

With nearly the same speed and precision that the human eye can identify a water bottle, the technology of deep learning is enabling the detection of malicious activity at the point of entry, in real time.

Editor’s Note: Last month, Dark Reading editors named Deep Instinct the most innovative startup in its first annual Best of Black Hat Innovation Awards program at Black Hat 2016 in Las Vegas. For more details on the competition and other results, read Best Of Black Hat Innovation Awards: And The Winners Are.

It’s hot outside and you’re thirsty. As you reach for a water bottle, you don’t pause to analyze its material, size or shape in order to determine whether it’s a water bottle. Instead, you immediately reach for it, with complete confidence in its identification.

If I show the same water bottle to any traditional computer vision module, it will easily recognize it. If I partially obstruct the image with my fingers, a traditional module will have difficulty recognizing it. But if I apply an advanced form of artificial intelligence called deep learning, which is resistant to small changes and can generalize from partial data, the module will correctly recognize the water bottle even when most of the image is obstructed.

Deep learning, also known as neural networks, is “inspired” by the brain’s ability to learn to identify objects. Take vision as an example. Our brain can process raw data derived from our sensory inputs and learn the high-level features all on its own. Similarly, in deep learning, raw data is fed through the deep neural network, which learns to identify the object on which it is trained. Machine learning, on the other hand, requires manual intervention in selecting which features to process through the machine learning modules. As a result, the process is slower and accuracy can be affected by human error. Deep learning's more sophisticated, self-learning capability results in higher accuracy and faster processing.

Similar to image recognition, in cybersecurity more than 99% of new threats and malware are actually very small mutations of previously existing ones. Even the remaining 1% of supposedly brand-new malware consists of more substantial mutations of existing malicious threats and concepts. Despite this, cybersecurity solutions -- even the most advanced ones that use dynamic analysis and traditional machine learning -- have great difficulty detecting a large portion of these new malware variants. The result is vulnerabilities that leave organizations exposed to data breaches, data theft, ransomware extortion, data corruption, and destruction. We can solve this problem by applying deep learning to cybersecurity.

The history of malware detection in a nutshell
Signature-based solutions are the oldest form of malware detection, which is why they are also called legacy solutions. To detect malware, the antivirus engine compares the contents of an unidentified piece of code to its database of known malware signatures. If the malware hasn’t been seen before, these methods rely on manually tuned heuristics to generate a handcrafted signature, which is then released as an update to clients. This process is time-consuming, and sometimes signatures are released months after the initial detection. As a result, this detection method can’t keep up with the million new malware variants that are created daily. This leaves organizations vulnerable to the new threats as well as threats that have already been detected but have yet to have a signature released.
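To make the brittleness of exact-match signatures concrete, here is a minimal, purely illustrative sketch (the sample bytes and database are hypothetical, not a real antivirus engine). A signature lookup is essentially a set membership test on a cryptographic hash, so even a one-byte mutation produces a miss:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Compute the SHA-256 digest of a file's bytes as a hex string."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical sample a vendor has already analyzed and signed.
known_sample = b"MZ\x90\x00...simulated malicious payload..."

# The signature database is just a set of exact hashes.
SIGNATURE_DB = {sha256_hex(known_sample)}

def is_known_malware(file_bytes: bytes) -> bool:
    """Exact-match lookup: any byte-level change alters the hash,
    so even a trivially modified variant slips past this check."""
    return sha256_hex(file_bytes) in SIGNATURE_DB
```

Appending a single null byte to the same payload is enough to defeat the lookup, which is why signature updates chase every mutation individually.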

Heuristic techniques identify malware based on behavioral characteristics in the code, which has led to behavioral-based solutions. This detection technique analyzes the malware's behavior at runtime, instead of the characteristics hardcoded in the malware code itself. Its main limitation is that it can discover malware only once the malicious actions have begun, so prevention is delayed and sometimes comes too late.
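A behavioral engine of this kind can be sketched as a weighted score over observed runtime actions. The behavior names, weights, and threshold below are illustrative assumptions, not any vendor's actual rule set:

```python
# Hypothetical weights an analyst might assign to suspicious runtime behaviors.
BEHAVIOR_WEIGHTS = {
    "modifies_autorun_key": 3,
    "injects_into_process": 4,
    "encrypts_user_files": 5,
    "opens_network_socket": 1,
}
THRESHOLD = 5  # illustrative cutoff for a "malicious" verdict

def classify(observed_behaviors: set) -> str:
    """Sum the weights of behaviors seen at runtime; flag the process
    once the total crosses the threshold. Note the inherent limitation:
    the verdict arrives only after the behaviors have already occurred."""
    score = sum(BEHAVIOR_WEIGHTS.get(b, 0) for b in observed_behaviors)
    return "malicious" if score >= THRESHOLD else "benign"
```

The design trade-off is visible in the code: detection requires the malicious actions to execute first, which is exactly the delayed-prevention problem described above.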

Sandbox solutions are a development of the behavioral-based detection method. These solutions execute the file in a virtual (sandbox) environment to determine whether it is malicious, instead of detecting the behavioral fingerprint at runtime. Although this technique has been shown to be quite accurate, that accuracy comes at the cost of real-time protection because the analysis is time-consuming. Additionally, newer types of malicious code can evade sandbox detection by stalling their execution in a sandbox environment, posing new challenges to this method of detection and, consequently, to prevention.
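The stalling evasion mentioned above can be modeled in a few lines. This is a simplified simulation under an assumed analysis budget (real sandboxes typically observe a sample for only a few minutes; the 120-second figure is illustrative):

```python
# Assumed sandbox observation window, in seconds (illustrative value).
SANDBOX_BUDGET_SECONDS = 120

def sandbox_verdict(delay_before_payload_seconds: int) -> str:
    """Simulate a sandbox run against a sample that sleeps before
    detonating. If the sample stalls longer than the observation
    window, the sandbox never sees any malicious activity and
    releases the file as clean."""
    if delay_before_payload_seconds <= SANDBOX_BUDGET_SECONDS:
        return "malicious"            # payload fired during observation
    return "no malicious activity"    # stalled past the analysis window
```

A sample that sleeps for ten minutes therefore walks out of a two-minute sandbox with a clean bill of health, which is the evasion class the paragraph describes.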

Malware detection using AI: machine learning & deep learning
Incorporating AI to enable more sophisticated detection is the latest step in the evolution of cybersecurity solutions. Malware detection methods based on classical machine learning apply elaborate algorithms to classify a file's behavior as malicious or legitimate according to feature engineering that is conducted manually. This process is time-consuming and requires massive human resources to tell the technology which parameters, variables, or features to focus on during classification. Even then, the rate of malware detection is still far from 100%.
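The manual step the article describes is the choice of features. In the sketch below, the feature names (file entropy, count of suspicious API imports, a packer flag) and the weights are illustrative assumptions standing in for what an analyst would hand-pick and a classical model would learn:

```python
# Hand-engineered features chosen by a human analyst (the manual step),
# with illustrative weights standing in for a trained linear model.
WEIGHTS = {"entropy": 0.6, "suspicious_imports": 0.9, "is_packed": 1.2}
BIAS = -5.0

def score(features: dict) -> float:
    """Linear score over the hand-picked features only. Anything the
    analyst did not think to encode is invisible to the model."""
    return BIAS + sum(WEIGHTS[k] * float(features.get(k, 0.0)) for k in WEIGHTS)

def predict(features: dict) -> str:
    return "malicious" if score(features) > 0 else "benign"
```

The limitation is structural: the model can only weigh what humans told it to look at, so a novel malware trait outside the feature list contributes nothing to the score.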

Deep learning AI is an advanced branch of machine learning, also known as “neural networks” because it is "inspired" by the way the human brain works. In our neocortex, the outer layer of our brain where high-level cognitive tasks are performed, we have several tens of billions of neurons. These neurons, which are largely general purpose and domain-agnostic, can learn from any type of data. This is the great revolution of deep learning because deep neural networks are the first family of algorithms within machine learning that do not require manual feature engineering. Instead, they learn on their own to identify the object on which they are trained by processing and learning the high-level features from raw data -- very much like the way our brain learns on its own from raw data derived from our sensory inputs.
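To make the contrast concrete, here is a minimal sketch of a two-layer network that consumes raw bytes directly, with no hand-picked features. The weights are toy values chosen for illustration, not a trained model; a real deep learning engine would learn them from millions of labeled files:

```python
import math

def relu(v):
    """Rectified linear activation, applied elementwise."""
    return [max(0.0, x) for x in v]

def sigmoid(x):
    """Squash a score into a (0, 1) maliciousness probability."""
    return 1.0 / (1.0 + math.exp(-x))

def dense(v, W, b):
    """One fully connected layer: output_j = b[j] + sum_i v[i] * W[j][i]."""
    return [bj + sum(xi * wij for xi, wij in zip(v, row)) for row, bj in zip(W, b)]

# Toy weights (illustrative only): 4 raw-byte inputs -> 3 hidden units -> 1 output.
W1 = [[0.5, -0.2, 0.1, 0.3],
      [-0.4, 0.6, 0.2, -0.1],
      [0.3, 0.3, -0.5, 0.2]]
b1 = [0.1, -0.1, 0.0]
W2 = [[0.7, -0.6, 0.4]]
b2 = [-0.2]

def predict(raw: bytes) -> float:
    """Feed raw bytes straight into the network: normalize to [0, 1],
    pass through a hidden layer, and emit a maliciousness probability.
    No analyst chose any feature here -- the input is the file itself."""
    x = [b / 255.0 for b in raw[:4].ljust(4, b"\x00")]
    hidden = relu(dense(x, W1, b1))
    return sigmoid(dense(hidden, W2, b2)[0])
```

During training, the weights (here hardcoded) are what the network adjusts on its own, which is precisely the step that replaces manual feature engineering.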

When applied to cybersecurity, the deep learning core engine is trained, without any human intervention, to determine whether a file is malicious or legitimate. Deep learning shows potentially groundbreaking results in detecting first-seen malware compared with classical machine learning. In real-environment tests on publicly known databases of endpoint, mobile, and APT malware, for example, a deep learning solution detected over 99.9% of both substantially and slightly modified malicious code. These results are consistent with the improvements deep learning has achieved in other fields, such as computer vision, speech recognition, and text understanding.

In the same way humans can immediately identify a water bottle in the real world, the technology advancements of deep learning -- applied to cybersecurity -- can enable the precise detection of new malware threats and fill in the critical gaps that leave organizations exposed to attacks.

Guy Caspi is a leading mathematician and a global expert in data science. He has 15 years of extensive experience applying mathematics and machine learning in an elite technology unit of the Israel Defense Forces (IDF), in financial institutions, and in intelligence organizations.