Dark Reading is part of the Informa Tech Division of Informa PLC


10/4/2019
10:00 AM
Marc Wilczek
Commentary

Cybercrime: AI's Growing Threat

Cybersecurity incidents are expected to rise by nearly 70% and cost $5 trillion annually by 2024.

These days, the use of artificial intelligence (AI) is becoming increasingly commonplace. Companies and governments use facial recognition technology to verify our identities; virtually every smartphone on the market has mapping and translation apps; and machine learning is an indispensable tool in diverse fields including conservation, healthcare, and agriculture.

As the power, influence, and reach of AI spreads, many international observers are scrutinizing the dual nature of AI technology. They're considering not only AI's positive transformative effects on human society and development — think of medical AI applications that help diagnose cancer early — but also its downsides, particularly in terms of the global security threats to which it can expose us all.

AI as a Weapon
As AI gets better and more sophisticated, it also enables cybercriminals to use deep learning and AI to breach security systems (just as cybersecurity experts use the same technology tools to detect suspicious online behavior). Deepfakes — using AI to superimpose one person's face or voice over another in a video, for example — and other advanced AI-based methods will probably play a larger role in social media cybercrime and social engineering. It sounds scary, and it's not science fiction.

In one noteworthy recent example of a deepfake that generated headlines in The Wall Street Journal, criminals employed AI-based software to replicate a CEO's voice to command a cash transfer of €220,000 (approximately $243,000). Cybercrime experts called it a rare case of hacking that leveraged artificial intelligence.

In that scam, the head of a UK-based energy company thought he was on the phone with his boss, the chief executive of the firm's German parent company, who directed him to send the money to a Hungarian supplier. The German "caller" claimed the request was urgent and ordered the unwitting UK executive to initiate the transfer within the hour.

The IoT is a Bonanza for Cybercriminals
That's just one instance of how AI has huge potential to transform how crime, and cybercrime in particular, is conducted. Using AI, bad actors will be able to refine their ability to launch attacks and discover new targets, such as by altering the signaling system in driverless cars. The growing ubiquity of the Internet of Things (IoT) is a particular gold mine for cybercriminals. There's also increasing convergence of operational technology and corporate IT, which means that the production lines, warehouses, conveyor belts, and cooling systems of tomorrow will be even more exposed to an unprecedented volume of cyber threats. Even pumps at gas stations could be controlled or taken offline from afar by hackers.

Like any improperly secured (or entirely unsecured) connected device, Internet-connected gas pumps and other smart devices could be co-opted into botnets for use in distributed denial-of-service (DDoS) attacks, with bad guys recruiting them to overload online services.

But it's not only companies that are vulnerable. Cyberattacks on critical infrastructure can lead to widespread blackouts that can cripple a major city, an entire region, or a country for days or weeks, which makes such attacks a massively destructive weapon for malicious nation-states. North Korea is infamous for cyber warfare capabilities including sabotage, exploitation, and data theft. According to the United Nations, the country has racked up roughly $2 billion via "widespread and increasingly sophisticated" cyberattacks to bankroll its weapons of mass destruction programs.

Damages to Exceed $5 Trillion by 2024
Because of the general trend toward corporate digitization and the growing volume of everyday activities that depend on online services, society is becoming ever more vulnerable to cyberattacks. Juniper Research recently reported that the price tag of security breaches will rise from $3 trillion per year to over $5 trillion in 2024, an average annual growth of 11%. As government regulation gets stricter, this growth will be driven mainly by steeper fines for data breaches as well as business losses incurred by enterprises that rely on digital services.
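Those two figures are consistent with each other: $3 trillion compounding at 11% per year for five years lands just over $5 trillion. A quick back-of-the-envelope check (treating 2019 as the baseline year, which is an assumption):

```python
# Back-of-the-envelope check of the Juniper projection:
# $3T in annual breach costs compounding at 11% per year for 5 years.
def compound(value: float, rate: float, years: int) -> float:
    """Project a value forward under constant annual growth."""
    return value * (1 + rate) ** years

projected = compound(3.0, 0.11, 5)  # trillions of dollars, 2019 -> 2024
print(f"${projected:.2f} trillion")  # roughly $5.06 trillion
```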

According to Juniper's report, the cost per breach will rise steadily. The volumes of data disclosed will certainly make headlines, but they won't directly drive breach costs, because most fines and lost business are not tied to breach size.

AI-Based Attacks Require AI-Based Defenses
As cyberattacks become increasingly devious and hard to detect, companies need to seriously rethink their defense strategies. AI can constantly improve itself, changing parameters and signatures automatically in response to whatever defense it's up against. Given the global shortage of IT and cybersecurity talent, merely putting more brilliant and ingenious noses to the grindstone won't solve the problem. The only way to battle a machine is with another machine.

On the plus side, AI has the potential to expand the reach for spotting and defending against cyberattacks, some of which have had worldwide impact. When it comes to detecting anomalies in traffic patterns or modeling user behavior, AI really shines. It can eliminate human error and dramatically reduce complexity. For example, Google stopped 99% of incoming spam using its machine learning technology. Some observers say AI may become a useful tool to link attacks to their perpetrators — whether it's a criminal act by a lone actor or a security breach by a rogue state.
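To make the anomaly-detection idea concrete (this is an illustrative sketch, not any vendor's actual system), even a crude statistical detector can flag a traffic spike that a rule-based filter might miss. The function name and threshold below are hypothetical, and the data is invented:

```python
import statistics

def find_anomalies(samples: list[float], threshold: float = 2.5) -> list[int]:
    """Return indices of samples whose z-score exceeds `threshold` --
    a crude flag for unusual traffic volumes."""
    mean = statistics.mean(samples)
    stdev = statistics.pstdev(samples)
    if stdev == 0:
        return []  # perfectly flat traffic: nothing to flag
    return [i for i, x in enumerate(samples)
            if abs(x - mean) / stdev > threshold]

# Hypothetical requests-per-minute: a steady baseline, then a spike.
traffic = [120, 118, 125, 122, 119, 121, 950, 123, 120]
print(find_anomalies(traffic))  # prints [6] -- the spike stands out
```

Production systems learn far richer baselines (per user, per endpoint, per time of day), but the principle is the same: model normal behavior, then flag deviations from it.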

In the cybersecurity world, the bad guys are picking up the pace. As a result, the corporate sector must pay attention to AI's potential as a first line of defense. Doing so is the only way to understand the threats and respond to the consequences of cybercrime.


Marc Wilczek is a columnist and recognized thought leader, geared toward helping organizations drive their digital agenda and achieve higher levels of innovation and productivity through technology. Over the past 20 years, he has held various senior leadership roles across ...