
Risk

10/4/2019 10:00 AM
Marc Wilczek
Commentary

Cybercrime: AI's Growing Threat

The cost of cybersecurity breaches is expected to rise by nearly 70%, to more than $5 trillion annually, by 2024.

These days, the use of artificial intelligence (AI) is becoming increasingly commonplace. Companies and governments use facial recognition technology to verify our identities; virtually every smartphone on the market has mapping and translation apps; and machine learning is an indispensable tool in diverse fields including conservation, healthcare, and agriculture.

As the power, influence, and reach of AI spread, many international observers are scrutinizing the dual nature of AI technology. They're considering not only AI's positive transformative effects on human society and development — think of medical AI applications that help diagnose cancer early — but also its downsides, particularly the global security threats to which it can expose us all.

AI as a Weapon
As AI grows more capable and sophisticated, it also lets cybercriminals use deep learning to breach security systems, just as cybersecurity experts use the same tools to detect suspicious online behavior. Deepfakes — using AI to superimpose one person's face or voice over another's in a video, for example — and other advanced AI-based methods will probably play a larger role in social media cybercrime and social engineering. It sounds scary, and it's not science fiction.

In one noteworthy recent example of a deepfake that generated headlines in The Wall Street Journal, criminals employed AI-based software to replicate a CEO's voice and order a cash transfer of €220,000 (approximately $243,000). Cybercrime experts called it a rare case of hacking that leveraged artificial intelligence.

In that scam, the head of a UK-based energy company thought he was on the phone with his boss, the chief executive of its German parent company, who directed him to send the money to a Hungarian supplier. The German "caller" claimed the request was urgent and ordered the unwitting UK executive to initiate the transfer within the hour.

The IoT Is a Bonanza for Cybercriminals
That's just one instance of AI's huge potential to transform how crime, and cybercrime in particular, is conducted. Using AI, bad actors will be able to refine their ability to launch attacks and discover new targets, such as by altering the signaling system in driverless cars. The growing ubiquity of the Internet of Things (IoT) is a particular gold mine for cybercriminals. There's also increasing convergence of operational technology and corporate IT, which means that the production lines, warehouses, conveyor belts, and cooling systems of tomorrow will be even more exposed to an unprecedented volume of cyber threats. Even pumps at gas stations could be controlled or taken offline from afar by hackers.

Like any connected device that's improperly secured (or not secured at all), Internet-connected gas pumps and other smart devices could be co-opted into botnets for use in distributed denial-of-service attacks, with bad guys recruiting them to help overload online services.

But it's not only companies that are vulnerable. Cyberattacks on critical infrastructure can lead to widespread blackouts that cripple a major city, an entire region, or a country for days or weeks, which makes such attacks a massively destructive weapon for malicious nation-states. North Korea is infamous for its cyber warfare capabilities, including sabotage, exploitation, and data theft. According to the United Nations, the country has racked up roughly $2 billion via "widespread and increasingly sophisticated" cyberattacks to bankroll its weapons of mass destruction programs.

Damages to Exceed $5 Trillion by 2024
Because of the general trend toward corporate digitization and the growing volume of everyday activities that require online services, society is becoming ever more vulnerable to cyberattacks. Juniper Research recently reported that the price tag of security breaches will rise from $3 trillion each year to over $5 trillion in 2024, an average annual growth of 11%. As government regulation gets stricter, this growth will be driven mainly by steeper fines for data breaches as well as business losses incurred by enterprises that rely on digital services.
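For readers who want to sanity-check Juniper's math, here is a quick back-of-the-envelope calculation in Python. It assumes the roughly 11% rate compounds annually from a $3 trillion baseline in 2019; the baseline year and the exact compounding are assumptions made for illustration, not details taken from the report itself.

# Rough sanity check of the Juniper Research projection (illustrative only):
# ~$3 trillion per year in breach costs, compounding at ~11% annually.
# The 2019 baseline year is an assumption, not a figure from the report.
cost_trillions = 3.0
annual_growth = 0.11

for year in range(2020, 2025):
    cost_trillions *= 1 + annual_growth
    print(f"{year}: ${cost_trillions:.2f} trillion")

# By 2024 the total compounds to just over $5 trillion, in line with
# the "over $5 trillion" figure cited above.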

According to Juniper's report, the cost per breach will steadily rise in the future. The volumes of data disclosed will certainly make headlines, but they won't directly drive breach costs, as most fines and lost business are not directly related to breach size.

AI-Based Attacks Require AI-Based Defenses
As cyberattacks become increasingly devious and hard to detect, companies need to give their defense strategies a serious second (or third) look. AI can constantly improve itself, changing parameters and signatures automatically in response to whatever defense it's up against. Given the global shortage of IT and cybersecurity talent, merely putting more brilliant and ingenious noses to the grindstone won't solve the problem. The only way to battle a machine is with another machine.

On the plus side, AI has the potential to expand the reach of defenses against cyberattacks, some of which have had worldwide impact. When it comes to detecting anomalies in traffic patterns or modeling user behavior, AI really shines: it can eliminate human error and dramatically reduce complexity. For example, Google stopped 99% of incoming spam using its machine learning technology. Some observers say AI may become a useful tool to link attacks to their perpetrators — whether it's a criminal act by a lone actor or a security breach by a rogue state.
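To make the traffic-anomaly idea concrete, here is a minimal sketch in Python using scikit-learn's IsolationForest. The per-connection features (bytes transferred, requests per minute) and the synthetic data are hypothetical and purely illustrative; they are not drawn from Google's spam filters or any specific security product.

import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-connection features: [bytes transferred, requests per minute].
rng = np.random.default_rng(0)
normal_traffic = rng.normal(loc=[500.0, 30.0], scale=[50.0, 5.0], size=(1000, 2))
flood_traffic = rng.normal(loc=[5000.0, 300.0], scale=[500.0, 50.0], size=(10, 2))
traffic = np.vstack([normal_traffic, flood_traffic])

# Learn what "normal" looks like without labels, then flag outliers.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(traffic)
labels = detector.predict(traffic)  # 1 = normal, -1 = anomalous

print(f"Flagged {int((labels == -1).sum())} of {len(traffic)} connections as suspicious")

In practice, production systems layer this kind of unsupervised baseline with supervised models and human review, but even the simple version illustrates why AI is well suited to spotting deviations from normal traffic patterns.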

In the cybersecurity world, the bad guys are picking up the pace. As a result, the corporate sector must pay attention to AI's potential as a first line of defense. Doing so is the only way to understand the threats and respond to the consequences of cybercrime.


Marc Wilczek is a columnist and recognized thought leader, geared toward helping organizations drive their digital agenda and achieve higher levels of innovation and productivity through technology. Over the past 20 years, he has held various senior leadership roles across ...