Attackers are weaponizing artificial intelligence and machine learning with the same logic and functionality that legitimate organizations rely on.

Satish Abburi, Founder of Elysium Analytics

March 21, 2019

5 Min Read

The adversarial use of artificial intelligence (AI) and machine learning (ML) by attackers may still be embryonic, but the prospect is becoming real. The shift is evolutionary: AI and ML gradually found their way out of the labs and into security defenses, and now they are increasingly being weaponized to overcome those same defenses by subverting their logic and underlying functionality.

Hackers and CISOs alike have access to the power of these developments, some of which are turning into off-the-shelf, plug-and-play offerings that let attackers get up and running quickly. It was only a matter of time before hackers started exploiting the flexibility of AI to find weaknesses as enterprises roll it out in their defensive strategies.

The intent of intelligence-based assaults remains the same as "regular" hacking. They could be politically motivated incursions, nation-state attacks, enterprise attacks to exfiltrate intellectual property, or financial services attacks to steal funds — the list is endless. AI and ML are normally considered a force for good. But in the hands of bad actors, they can wreak serious damage. Are we heading toward a future where bots will battle each other in cyberspace?

When Good Software Turns Bad
Automated penetration testing using ML is only a few years old. Now, tools such as Deep Exploit let adversaries pen test their targeted organizations and find open holes in defenses in 20 to 30 seconds, a task that used to take hours. ML models speed the process by quickly ingesting data, analyzing it, and producing results that are optimized for the next stage of the attack.

Cloud computing and access to powerful CPUs and GPUs increase the risk that these adversaries become expert at wielding AI/ML tool sets that were designed for the good guys to use.

Combined with AI, ML provides automation platforms for exploit kits; in effect, we are fast approaching the industrialization of automated intelligence aimed at breaking down cyber defenses that were themselves constructed with AI and ML.

Many of these successful exploit kits enable a new level of automation that makes attackers more intelligent, efficient, and dangerous. DevOps and many IT groups are using AI and ML for gaining insights into their operations, and attackers are following suit.

Injecting Corrupted Data
As researchers point out, attackers will learn how an enterprise defends itself with ML, then feed corrupt data into the specific algorithms and statistical models the enterprise uses in order to throw off its defensive machine learning models. Ingested data is the key to the puzzle: it is what allows ML to build its knowledge in the first place, which is exactly why corrupting it is so effective.

Many ML models in cybersecurity solutions, especially deep learning models, are considered black boxes in the industry. They can use more than 100,000 feature inputs to make their determinations and detect the patterns needed to solve a problem, such as spotting anomalous exploit behavior in an organization or network.
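To make that concrete, here is a minimal sketch of the kind of black-box anomaly scorer being described, using scikit-learn's IsolationForest over a hypothetical matrix of per-host telemetry features. The feature count, values, and thresholds are illustrative assumptions only, not any vendor's actual model.

```python
# A minimal sketch of a black-box anomaly scorer, assuming hypothetical
# per-host telemetry features (real models may use 100,000+ features).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical "normal" telemetry: 1,000 hosts, 20 numeric features each.
normal = rng.normal(loc=0.0, scale=1.0, size=(1000, 20))

# A handful of observations that sit far from the learned baseline.
anomalous = rng.normal(loc=6.0, scale=1.0, size=(5, 20))

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

# decision_function: higher scores look normal, lower scores look anomalous.
print("normal scores   :", model.decision_function(normal[:3]).round(3))
print("anomalous scores:", model.decision_function(anomalous).round(3))
```

The model flags the outliers reliably, but nothing in its output says why, which is exactly the trust problem security teams face.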

From the security team's point of view, these black-box models require trusting a model or algorithm they don't understand, and that level of trust prompts the question: Can "math" really catch the bad actors?

Data Poisoning
One improvement on the horizon is giving teams in the security operations center the ability to understand how ML models reach their conclusions, rather than having to flat-out trust that the algorithms are doing their jobs. When the model says there is anomalous, risky behavior, the software can explain the reasoning behind the math and how it came to that conclusion.
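As a rough illustration of what such an explanation could look like, the sketch below scores a session with a simple logistic-regression risk model and breaks the score into per-feature contributions. The feature names, training data, and model choice are hypothetical stand-ins; production tools use far richer attribution techniques, but the goal is the same readable breakdown.

```python
# A toy "explainable" risk score: a linear model whose per-feature
# contributions can be shown to an analyst. All data here is hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["failed_logins", "bytes_out_mb", "rare_process", "off_hours"]

# Hypothetical labeled sessions: 1 = known-bad, 0 = benign.
X = np.array([
    [0, 1, 0, 0], [1, 2, 0, 0], [0, 1, 0, 1], [2, 3, 0, 0],      # benign
    [9, 40, 1, 1], [7, 55, 1, 0], [12, 30, 1, 1], [8, 60, 0, 1],  # malicious
])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])

model = LogisticRegression().fit(X, y)

session = np.array([10.0, 45.0, 1.0, 1.0])      # a new session to score
risk = model.predict_proba([session])[0, 1]
contributions = model.coef_[0] * session         # per-feature log-odds contribution

print(f"risk score: {risk:.2f}")
for name, c in sorted(zip(feature_names, contributions), key=lambda t: -t[1]):
    print(f"  {name:<14} adds {c:+.2f} to the log-odds")
```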

Such explainability is extremely important because it is difficult to detect whether adversaries have injected bad data, or "poisoned" it, into defensive enterprise security tools to retrain the models away from their attack vectors. By poisoning the data an ML model learns from, adversaries can skew its baseline behavioral profile so their own activity earns an artificially low risk score within the enterprise and their ingress is allowed to continue.
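A toy numerical sketch of that effect, assuming a single hypothetical feature (outbound megabytes per hour) and a simple z-score baseline rather than any real product's model, shows how injected "plausible" traffic drags the learned baseline until genuine exfiltration no longer stands out:

```python
# Toy data-poisoning example: shifting a learned baseline with injected
# traffic so that real exfiltration earns a low anomaly score. All numbers
# are illustrative assumptions.
import numpy as np

def risk_score(value, baseline):
    """Anomaly score: how many standard deviations above the baseline mean."""
    return (value - baseline.mean()) / baseline.std()

rng = np.random.default_rng(1)
clean_baseline = rng.normal(loc=50, scale=10, size=1000)   # typical hosts
exfil_traffic = 400.0                                      # the actual attack

print("score vs. clean baseline   :", round(risk_score(exfil_traffic, clean_baseline), 1))

# The attacker slowly injects inflated-but-plausible traffic during the
# model's retraining windows, dragging the mean and variance upward.
poison = rng.normal(loc=350, scale=40, size=300)
poisoned_baseline = np.concatenate([clean_baseline, poison])

print("score vs. poisoned baseline:", round(risk_score(exfil_traffic, poisoned_baseline), 1))
```

Against the clean baseline the exfiltration sits roughly 35 standard deviations out; after poisoning it drops to about two, low enough to slip under most alert thresholds.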

What the Future Holds
For other aims, such as influencing voters, bad actors run ML against Twitter feeds to spot the patterns politicians use to sway specific groups of voters. Once their ML algorithms find these campaigns and identify their patterns, they can create counter-campaigns to manipulate opinion or poison a positive campaign being pushed by a political group.

Then there is the threat of botnets. Mirai was the first to cause widespread havoc, and variants now use new attack vectors to build zombie hordes of Internet of Things devices. Even more complex industrial IoT attacks focus on taking down nuclear facilities or whole smart cities, and researchers have studied how advanced botnets could take down water systems and power grids.

AI and ML are now off-the-shelf technologies available to midlevel engineers, who no longer need to be data scientists to master them. The one thing that keeps this from being a perfect technology for good actors or bad actors alike is the challenge of operationalizing machine learning so that false positives and false negatives are greatly reduced.

That is what new "cognitive" technologies aspire to become: more than the sum of their AI and ML parts, not just detecting patterns of bad behavior in big data with high accuracy, but also justifying recommendations about how to deal with them by providing context for the decision-making.
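To see why those false positives are the sticking point, a back-of-the-envelope calculation with hypothetical daily event volumes (the figures below are assumptions for illustration, not measurements) shows how even a seemingly accurate detector can bury a security operations team in noise:

```python
# Back-of-the-envelope math on false positives, using hypothetical volumes.
malicious_events = 100        # assumed true attacks per day
benign_events = 1_000_000     # assumed benign events per day

true_positive_rate = 0.95     # detector catches 95% of attacks
false_positive_rate = 0.01    # and misfires on 1% of benign events

tp = malicious_events * true_positive_rate
fp = benign_events * false_positive_rate
fn = malicious_events - tp

precision = tp / (tp + fp)    # fraction of alerts that are real attacks
print(f"alerts raised: {int(tp + fp):,}")
print(f"attacks caught: {int(tp)}, attacks missed: {int(fn)}")
print(f"precision: {precision:.1%}")   # roughly 1 alert in 100 is real
```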


About the Author(s)

Satish Abburi

Founder of Elysium Analytics

Satish Abburi is the Founder of Elysium Analytics, the cognitive SIEM (security information and event management) company incubated at System Soft Technologies, where he also leads the Big Data Solutions practice. Prior to this, Satish was Vice President of Engineering at Hexis, where he led teams in software engineering development, quality engineering, and release management. Satish brings deep understanding of, and experience in, database technology and enterprise applications in security and SIEM, as well as big data and analytics products.

Satish has spent the last decade leading teams in building big data product stacks: data collection, ingestion, storage, search, visualization, modeling, and analytics packages. His team supported machine data collection for the largest (top three) customers in semiconductors, telco, defense, and healthcare. He earned his MS degree in software systems from the Birla Institute of Technology, India, and completed an executive leadership course at Cornell University.
