
Threat Intelligence

3/5/2019 10:30 AM
Chris Rouland
Commentary

Artificial Intelligence: The Terminator of Malware

Is it possible that the combination of AI, facial recognition, and the coalescence of global mass-hack data could lead us toward a Skynet-like future?

For many of us, The Terminator series was our introduction to the potential dangers of artificial intelligence (AI). As Skynet's advanced AI became self-aware, it concluded that humanity was a threat to its existence and sprang into self-preservation mode, ultimately triggering a nuclear holocaust and deploying an army of Terminators to battle the resistance.

That was purely fictional back in 1984. Thirty-five years later, AI-powered threats are the new reality, which raises the question: Are we headed for a Skynet-like future in which AI takes over the world? Perhaps we're not quite there yet, but the ingredients are all in place, and together they could be a recipe for disaster.

As our understanding of AI evolves, AI-powered attacks will grow more sophisticated. Maturing open source machine learning tools such as Google's TensorFlow will be embedded in malcode, driving even more damaging botnets, viruses, worms, trojans, targeted phishing expeditions, and so on. Of particular concern is the combination of machine learning, automated facial recognition, and the huge amounts of data exposed in recent dumps, which puts billions of people at greater risk of compromise than ever before.

One recent data dump, known as Collections #1–5, is raising alarms because of the sheer number of people it could affect: well over 2 billion usernames and passwords were dumped onto the Dark Web. Because data is the foundation of AI, hackers can now run machine learning-based operations that combine automated facial recognition with the information in Collections #1–5, traversing social media networks and other sites to carry out automated spearphishing campaigns and a variety of other villainous exploits.

An AI populated with billions of email/password pairs has a huge head start on leveraging evasive and powerful attack tools such as DeepLocker and Social Mapper. Consider the kill chain that begins with credentials shared between corporate and personal email accounts: that's a very soft target for the Terminator of malware. Even if only 1% of the passwords in the Collections are still valid and reused across accounts, that is well over 20 million vulnerable victims, and statistical analysis tells us the real rate is far higher.
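
To put that math in perspective, here is a minimal back-of-the-envelope sketch in Python. The 2.2 billion record count and the reuse rates are illustrative assumptions chosen for the arithmetic, not figures taken from the Collections themselves.

    # Back-of-the-envelope estimate of accounts exposed by a credential dump.
    # All inputs are illustrative assumptions, not measured values.
    def exposed_accounts(total_pairs: int, valid_and_reused_rate: float) -> int:
        """Estimate how many leaked credential pairs are still valid and reused."""
        return int(total_pairs * valid_and_reused_rate)

    TOTAL_PAIRS = 2_200_000_000  # "well over 2 billion" username/password pairs

    # 1% is the conservative floor used above; higher reuse rates are plausible.
    for rate in (0.01, 0.05, 0.10):
        print(f"{rate:.0%} valid and reused -> {exposed_accounts(TOTAL_PAIRS, rate):,} accounts")

At the 1% floor, that works out to roughly 22 million exposed accounts; at 5% or 10% reuse, the exposure climbs past 100 million.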

So, how bad could it get? Realistically, a mass collective hive of botnets armed with credentials, email, facial recognition, and social network data could craft AI-generated phishing lures convincing enough to make email unusable. Theoretically, with Collections #1–5 at its disposal, Skynet could now take over the world.

Which leads us to the need for a Resistance. Fortunately, Skynet does not exist… at least, not that we know of. But it will take a lot more than John Connor to win the AI war against cybercriminals. It will take a global coalition of brilliant minds and organizations from the private and public sectors fighting fire with fire, deploying AI-based security solutions that can keep pace with, outmaneuver, and outthink these AI-powered attacks. The US Department of Defense echoed this sentiment in a recently unveiled summary of its official artificial intelligence strategy:

We cannot succeed alone; this undertaking requires the skill and commitment of those in government, close collaboration with academia and non-traditional centers of innovation in the commercial sector, and strong cohesion among international allies and partners. We must learn from others to help us achieve the fullest understanding of the potential of AI, and we must lead in responsibly developing and using these powerful technologies, in accordance with the law and our values.

Perhaps the late Stephen Hawking said it best: "Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilization."

Or as the Terminator might say: "Hasta la vista, baby."


Chris Rouland is Co-Founder and Chief Executive Officer of Phosphorus Cybersecurity, Inc. A 25-year veteran of the information security industry, Chris is a renowned leader in cybersecurity innovation and disruption. In his career, Chris has founded and led several ...