Dark Reading is part of the Informa Tech Division of Informa PLC


Risk

10/4/2019
10:00 AM
Marc Wilczek
Commentary

Cybercrime: AI's Growing Threat

Cybersecurity incidents are expected to rise by nearly 70% and cost $5 trillion annually by 2024.

These days, the use of artificial intelligence (AI) is becoming increasingly commonplace. Companies and governments use facial recognition technology to verify our identities; virtually every smartphone on the market has mapping and translation apps; and machine learning is an indispensable tool in diverse fields including conservation, healthcare, and agriculture.

As the power, influence, and reach of AI spreads, many international observers are scrutinizing the dual nature of AI technology. They're considering not only AI's positive transformative effects on human society and development — think of medical AI applications that help diagnose cancer early — but also its downsides, particularly in terms of the global security threats to which it can expose us all.

AI as a Weapon
As AI becomes more powerful and sophisticated, it also enables cybercriminals to breach security systems with deep learning (just as cybersecurity experts use the same tools to detect suspicious online behavior). Deepfakes — using AI to superimpose one person's face or voice onto another in a video, for example — and other advanced AI-based methods will probably play a larger role in social media cybercrime and social engineering. It sounds scary, and it isn't science fiction.

In one noteworthy recent example of a deepfake that generated headlines in The Wall Street Journal, criminals employed AI-based software to replicate a CEO's voice to command a cash transfer of €220,000 (approximately $243,000). Cybercrime experts called it a rare case of hacking that leveraged artificial intelligence.

In that scam, the head of a UK-based energy company thought he was on the phone with his boss, the chief executive of the firm's German parent company, who directed him to send the money to a Hungarian supplier. The German "caller" claimed the request was urgent and ordered the unwitting UK executive to initiate the transfer within the hour.

The IoT Is a Bonanza for Cybercriminals
That's just one instance of AI's huge potential to transform how crime, and cybercrime in particular, is conducted. Using AI, bad actors will be able to refine their attacks and discover new targets, such as by altering the signaling system in driverless cars. The growing ubiquity of the Internet of Things (IoT) is a particular gold mine for cybercriminals. There's also increasing convergence of operational IT and corporate IT, which means that the production lines, warehouses, conveyor belts, and cooling systems of tomorrow will be even more exposed to an unprecedented volume of cyber threats. Even pumps at gas stations could be controlled or taken offline from afar by hackers.

Like any improperly secured connected device (or one not secured at all), Internet-connected gas pumps and other smart devices could be co-opted into botnets for use in distributed denial-of-service attacks, with bad actors recruiting them to overload online services.

But it's not only companies that are vulnerable. Cyberattacks on critical infrastructure can lead to widespread blackouts that can cripple a major city, an entire region, or a country for days or weeks, which makes such attacks a massively destructive weapon for malicious nation-states. North Korea is infamous for cyber warfare capabilities including sabotage, exploitation, and data theft. According to the United Nations, the country has racked up roughly $2 billion via "widespread and increasingly sophisticated" cyberattacks to bankroll its weapons of mass destruction programs.

Damages to Exceed $5 Trillion by 2024
Because of the general trend toward corporate digitization and the growing volume of everyday activities that depend on online services, society is becoming ever more vulnerable to cyberattacks. Juniper Research recently reported that the annual price tag of security breaches will rise from $3 trillion to over $5 trillion in 2024, an average annual growth rate of 11%. As government regulation gets stricter, this growth will be driven mainly by higher fines for data breaches as well as business losses incurred by enterprises that rely on digital services.
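Those figures are consistent with simple compound growth over the five years from 2019 to 2024. A quick back-of-the-envelope check (a sketch assuming plain compounding, not Juniper's actual model):

```python
# Sanity check of the reported projection, assuming simple
# compound growth; Juniper's own methodology may differ.
base = 3.0    # annual breach costs in $ trillions (2019)
rate = 0.11   # reported average annual growth rate
years = 5     # 2019 -> 2024

projected = base * (1 + rate) ** years
print(f"Projected 2024 cost: ${projected:.2f} trillion")  # ~ $5.06 trillion
```

Growing $3 trillion at 11% per year does indeed land just past the $5 trillion mark by 2024.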

According to Juniper's report, the cost per breach will rise steadily. The amount of data disclosed will certainly make headlines, but it won't directly drive breach costs, as most fines and lost business are not tied to breach size.

AI-Based Attacks Require AI-Based Defenses
As cyberattacks become increasingly devious and hard to detect, companies need to seriously rethink their defense strategies. AI can constantly improve itself, changing parameters and signatures automatically in response to whatever defense it is up against. Given the global shortage of IT and cybersecurity talent, simply putting more brilliant and ingenious noses to the grindstone won't solve the problem. The only way to battle a machine is with another machine.

On the plus side, AI has the potential to expand our capacity to spot and defend against cyberattacks, some of which have had worldwide impact. AI really shines at detecting anomalies in traffic patterns and modeling user behavior. It can eliminate human error and dramatically reduce complexity. For example, Google stopped 99% of incoming spam using its machine learning technology. Some observers say AI may become a useful tool for linking attacks to their perpetrators — whether a criminal act by a lone actor or a security breach by a rogue state.
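To make "detecting anomalies in traffic patterns" concrete, here is a minimal sketch of statistical baselining. It uses a simple z-score threshold rather than a trained model, and the traffic numbers are invented for illustration — real AI-based tools learn far richer baselines:

```python
import statistics

def flag_anomalies(samples, threshold=2.0):
    """Flag samples more than `threshold` standard deviations
    from the mean, a toy stand-in for the statistical baselining
    that AI-based traffic-analysis tools perform."""
    mean = statistics.mean(samples)
    stdev = statistics.stdev(samples)
    return [x for x in samples if abs(x - mean) > threshold * stdev]

# Requests per minute; the final burst mimics a DDoS ramp-up.
traffic = [120, 115, 130, 125, 118, 122, 127, 119, 4800]
print(flag_anomalies(traffic))  # [4800]
```

The principle is the same one production systems apply at scale: learn what normal looks like, then alert on deviations too large to be noise.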

In the cybersecurity world, the bad guys are picking up the pace. As a result, the corporate sector must pay attention to AI's potential as a first line of defense. Doing so is the only way to understand the threats and respond to the consequences of cybercrime.


Marc Wilczek is a columnist and recognized thought leader, geared toward helping organizations drive their digital agenda and achieve higher levels of innovation and productivity through technology. Over the past 20 years, he has held various senior leadership roles across ...