There's no denying that the pandemic created a breeding ground for fraud. Cybercriminals thrive in organizations' blind spots, and the shift to digital business created a wealth of vulnerabilities for bad actors to take advantage of.
The expanding fraud economy has caused online scams to mature well beyond siloed attacks and lone basement dwellers. The more than 37 billion records leaked in 2020 alone armed bad actors with the data to execute larger and more devastating attacks.
However, recent crackdowns on Dark Web marketplaces have pushed fraudsters toward new and under-the-radar places to commit illegal activity. Forced out of Dark Web forums, cybercriminals have set their sights on secure messaging apps, such as Telegram, to conduct fraudulent activity. Because they reside on the Deep Web, the part of the Internet not indexed by search engines, secure messaging apps are a haven for professional criminals to remain anonymous while wreaking havoc and turning a profit.
But professional cybercriminals aren't the only ones benefiting from this new era of messaging-app-based fraud. Accessible to almost anyone around the world, these applications have become an attractive vehicle for new fraudsters to experiment with little risk.
The New (Mess)Age of Fraudsters
Today's bad actors focus less on careful, covert crimes and more on getting what they want however they can. This new mindset is an integral factor in the influx of fraud on messaging apps and forums.
Messaging apps often provide the security and safety features that fraudsters need to remain undetected. Knowing that apps' privacy-focused features and strong encryption act as protection, cybercriminals increasingly gather on messaging forums to resell stolen credentials and run fraud rings. But they're not the only ones. Over the last year, there's been an influx of would-be criminals turning to messaging apps to test out fraud for the first time.
Through messaging forums, individuals can essentially test the fraud waters and gauge the amount of risk they're willing to take. Knowing that many of these newbies are lurking on messaging apps, professional cybercriminals are advertising their services to newcomers on the platforms.
One example is a recent Telegram fraud scheme identified by my company, Sift, in which professional bad actors steal from restaurants and food delivery services. By advertising their ability to purchase food and beverage orders with stolen information, they're able to offer opportunistic diners meals at a heavily discounted rate. The would-be diner then uses cryptocurrency to pay the cybercriminal, who uses stolen credit card details or hacked accounts to purchase the meal and have it delivered to the diner's location.
This scam involves two different types of fraudsters: the professional cybercriminal advertising inexpensive food-purchasing services, and the more passive fraudster who just wants to score an absurdly cheap meal, all to the detriment of the victimized restaurant that sells the food.
The low cost of the meal reduces the perceived risk for the casual fraudster. Knowing they're less likely to be caught buying a single meal, they're more willing to dip their toe in. Then they can decide if they're comfortable buying other services offered on fraud forums, like fake COVID-19 test results or vaccine cards.
Stopping the Messenger
While it is nearly impossible for security teams to shut down this type of fraud on messaging apps, they can mitigate exposure by evolving beyond legacy approaches and adopting a digital trust and safety strategy. This approach bakes risk detection into the entire decision-making process and treats customer safety and experience as one, so enterprises no longer have to choose between increasing revenue and decreasing fraud.
By implementing new processes and technologies such as machine learning, enterprises can more effectively fight fraud at scale. Machine learning is essential not only for identifying new fraud trends but also for adapting risk thresholds as attack patterns shift. By ingesting thousands of different signals beyond purchase data, machine learning systems can quickly adapt to detect suspicious activity in real time without human intervention.
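To make the idea concrete, here is a minimal, purely illustrative sketch of how multiple behavioral signals might be combined into a single risk score and compared against a threshold. The signal names and weights below are invented for this example; in a production system, a trained model (not hand-set weights) would learn these relationships from labeled transaction data.

```python
# Hypothetical sketch: scoring an order by combining several normalized
# signals (each in the range 0.0-1.0). All names and weights are assumptions
# made for illustration, not any vendor's actual model.

def risk_score(signals, weights):
    """Weighted sum of normalized signals."""
    return sum(weights[name] * value for name, value in signals.items())

# Example signals beyond raw purchase data that a system might ingest.
WEIGHTS = {
    "new_account": 0.25,      # account created minutes before the order
    "ip_geo_mismatch": 0.30,  # IP location differs from billing address
    "crypto_funded": 0.20,    # payment tied to crypto cash-out patterns
    "velocity": 0.25,         # many orders placed in a short window
}

def is_suspicious(signals, threshold=0.5):
    """Flag the order when the combined score crosses the risk threshold."""
    return risk_score(signals, WEIGHTS) >= threshold

# A routine order trips few signals...
legit = {"new_account": 0.0, "ip_geo_mismatch": 0.1,
         "crypto_funded": 0.0, "velocity": 0.2}
# ...while a scripted fraud attempt trips several at once.
fraud = {"new_account": 1.0, "ip_geo_mismatch": 0.9,
         "crypto_funded": 1.0, "velocity": 0.8}

print(is_suspicious(legit))   # False
print(is_suspicious(fraud))   # True
```

The point of the sketch is the design, not the numbers: because the decision rests on many weak signals rather than one rule, the threshold can be tuned up or down as fraud patterns evolve without rewriting detection logic.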
The expansion of the fraud economy to secure messaging apps showcases how quickly bad actors can shift their tactics. Frustratingly, there's little businesses can do to prevent bad actors from advertising their services to a fraud-curious crowd. But by adopting a more holistic approach to fraud and understanding the signals that precipitate fake purchases, security teams can ensure their home bases — such as websites and apps — are protected.