
Countering Voice Fraud in the Age of AI

Caller ID spoofing and AI voice deepfakes are supercharging phone scams. Fortunately, we have tools that help organizations and people protect themselves against the devious combination.

Laura Wilber, Senior Industry Analyst, Enea

April 17, 2024


COMMENTARY
Three seconds of audio is all it takes to clone a voice. Vishing, or voice fraud, has rapidly become a problem many of us know only too well, impacting 15% of the population. Over three-quarters of victims end up losing money, making this the most lucrative type of imposter scam on a per-person basis, according to the US Federal Trade Commission (FTC).

When caller ID spoofing is combined with AI-based deepfake technology, fraudsters can, at very little cost and at huge scale, disguise their real numbers and locations and convincingly impersonate trusted organizations, such as a bank or local council, or even friends and family.

While artificial intelligence (AI) presents all manner of new threats, the ability to falsify caller IDs remains the primary point of entry for sophisticated fraud, and it poses a serious challenge for authenticating genuine calls. Let's delve into the criminal world of caller ID spoofing.

What's Behind the Rise in Voice Fraud?

The democratization of spoofing technology, such as spoofing apps, has made it easier for malicious actors to impersonate legitimate caller IDs, leading to an increase in fraudulent activities conducted via voice calls. One journalist, who said she's known for her rational and meticulous nature, fell victim to a sophisticated scam that exploited her fear and concern for her family's safety. Initially contacted through a spoof call that appeared to be from Amazon, she was transferred to someone posing as an FTC investigator, who convincingly presented her with a fabricated story involving identity theft, money laundering, and threats to her safety.

These stories are becoming increasingly common. Individuals are primed to be skeptical of a withheld, international, or unknown number, but if they see the name of a legitimate company flash up on their phones, they are more likely to answer the call in an accommodating manner.

In addition to spoofing, we are also seeing a rise in AI-generated audio deepfakes. Last year in Canada, criminals scammed senior citizens out of more than $200,000 by using AI to mimic the voices of loved ones in trouble. A mother in the US state of Arizona also received a desperate call from what sounded like her 15-year-old daughter claiming she'd been kidnapped; the voice turned out to be AI-generated. When combined with caller ID spoofing, these deepfakes would be almost impossible for the average person to catch.

As generative AI and AI-based tools become more accessible, this kind of fraud is becoming more common. Cybercriminals don't necessarily need to make direct contact to replicate a voice because over half of people willingly share their voice in some form at least once a week on social media, according to McAfee. Nor do they need exceptional digital skills, since apps do the hard work of cloning the voice based on a short audio clip, as highlighted recently by high-profile deepfakes of US President Joe Biden and singer Taylor Swift.

Entire organizations can fall prey to voice fraud, not just individuals. All it takes is one threat actor to convince one employee to share some seemingly insignificant detail about their business over the phone, which is then used to join the dots and enable a cybercriminal to gain access to sensitive data. It's a particularly worrisome trend in industries where voice communication is a key component of customer interaction, such as banking, healthcare, and government services. Many businesses rely on voice calls for verifying identities and authorizing transactions. As such, they are particularly vulnerable to AI-generated voice fraud.

What We Can Do About It

Regulators, industry bodies, and businesses increasingly recognize the need for collective action against voice fraud. This could include sharing intelligence to better understand scam patterns across regions and industries, developing industrywide standards to improve voice call security, and tightening the reporting regulations that govern network operators.

Regulators around the world are now tightening the rules around AI-based voice fraud. For instance, the US Federal Communications Commission (FCC) has ruled that robocalls using AI-generated voices fall under existing restrictions on artificial and prerecorded voices, effectively making them illegal without the recipient's consent. In Finland, the government has imposed new obligations on telecommunications operators to guard against caller ID spoofing and the transfer of scam calls to recipients. The EU is investigating similar measures, primarily driven by banks and other financial institutions that want to keep their customers safe. In all instances, efforts are underway to close the door on caller ID spoofing and smishing (fake text messages), which often serve as the entry point for more sophisticated, AI-based tactics.

Many promising detection tools in development could, in theory, drastically reduce voice fraud. They include voice biometrics, deepfake detectors, AI-based anomaly detection, blockchain, and signaling firewalls. However, cybercriminals are adept at outpacing and outwitting technological leaps, so only time will tell which will work best.
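To give a flavor of the anomaly-detection idea, the sketch below scores call metadata with an unsupervised model and flags outliers for further checks before a call is delivered. The features, thresholds, and toy data are hypothetical illustrations; real signaling firewalls work on far richer SS7, Diameter, and SIP metadata and at carrier scale.

```python
# Minimal sketch: flagging anomalous calling patterns with an unsupervised model.
# All feature names and values below are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row is one call: [calls_from_number_last_hour, distinct_destinations,
#                        avg_call_duration_seconds, origin_mismatch (0/1)]
# origin_mismatch = 1 when the claimed caller ID doesn't fit the routing path.
historical_calls = np.array([
    [1, 1, 180, 0],
    [2, 2, 240, 0],
    [1, 1,  95, 0],
    [3, 2, 310, 0],
])

model = IsolationForest(contamination=0.05, random_state=42)
model.fit(historical_calls)

# A burst of short calls whose claimed origin doesn't match its routing path
# scores as anomalous and can be held for verification instead of connected.
suspicious_call = np.array([[120, 85, 12, 1]])
print(model.predict(suspicious_call))  # -1 = anomaly, 1 = normal
```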

For businesses of all sizes and sectors, the cybersecurity capabilities built into their telecom services will become increasingly important. Beyond protections at the network level, businesses should establish clear policies and processes, such as multifactor authentication that draws on a variety of verification methods.
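As a concrete example of such a policy, the sketch below encodes a simple "never trust the inbound call alone" rule: a sensitive request made over the phone is approved only if it is confirmed through a second, independent channel. The action names, fields, and function are hypothetical, not a reference to any particular vendor's system.

```python
# Minimal sketch of an out-of-band verification policy for voice-initiated requests.
from dataclasses import dataclass

SENSITIVE_ACTIONS = {"change_payment_details", "reset_credentials", "wire_transfer"}

@dataclass
class VoiceRequest:
    caller_id: str           # as presented by the network -- spoofable
    requested_action: str
    otp_confirmed: bool      # one-time code entered in the official app
    callback_verified: bool  # we hung up and called the number on file

def approve(request: VoiceRequest) -> bool:
    """Approve a sensitive action only if a factor independent of the call succeeds."""
    if request.requested_action not in SENSITIVE_ACTIONS:
        return True
    # Caller ID alone is never sufficient -- it is the very thing being spoofed.
    return request.otp_confirmed or request.callback_verified

print(approve(VoiceRequest("+1-555-0100", "wire_transfer",
                           otp_confirmed=False, callback_verified=False)))  # False
```

The design point is simply that the inbound call, and anything displayed on it, never serves as an authentication factor by itself.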

Companies should also raise awareness of the most common fraud tactics. Regular training for employees should focus on recognizing and responding to scams, while customers should be encouraged to report suspicious calls.

On the consumer level, the UK's communications regulator, Ofcom, found that more than 41 million people were targeted by suspicious calls or texts over a three-month period in 2022. The scale of the problem shows that, although brands and governments keep reiterating the message that legitimate businesses will never ask for money or sensitive information over the phone, continued vigilance is necessary.

The easy availability of cloning tools and spiraling crime levels have experts, including those at the Electronic Frontier Foundation, suggesting that families agree on a shared password to thwart AI-based fraud attempts. It's a surprisingly low-tech solution to a high-tech challenge.

About the Author

Laura Wilber

Senior Industry Analyst, Enea

Laura Wilber is a Senior Industry Analyst at Enea. She supports cross-functional and cross-portfolio teams with technology and market analysis, product marketing, product strategy, and corporate development. She is also an ESG Advisor & Committee Member. Her expertise includes cybersecurity and networking in enterprise, telecom, and industrial markets, and she loves helping customers meet today's challenges while musing about what the next ten years will bring.

