Online Fraud: Now a Major Application Layer Security Problem
The explosion of consumer-facing online services and applications is making it easier and cheaper for cybercriminals to host malicious content and launch attacks.
Online fraud is a subset of cybercrime that typically takes place at the application layer. Fraud was once associated mainly with scams (Nigerian prince emails, for instance), fraudulent transactions, and identity theft, but its reach has exploded in recent years thanks to the many new ways to cash in on consumer-facing online services and applications.
Many of these new fraud attacks have already made headlines:
Fake reviews and purchases to artificially boost a product or seller's ranking
Fake accounts created to take advantage of sign-up promotions/bonuses
Fraudulent listings with counterfeit products or attractive prices to lure buyers off the platform into under-the-table (and potentially unsafe) transactions
Bots generating artificial clicks, installations, and app engagement
Virtual items in online games traded or resold for profit
Fraudulent transactions
Fraudulent credit card and bank account openings using stolen or fake identities
The list goes on. But unlike other types of cybercrime that "hack" into a network or system by obtaining unauthorized access, these fraud attacks can be launched simply by registering user accounts and abusing product features the online services and applications already offer. The services themselves have become part of the attack platform. For cybercriminals, why pay for bulletproof hosting when you can freely and anonymously put up content on social networks and peer-to-peer marketplaces?
This shift away from specialized attack infrastructure means that the blacklists and reputation lists traditionally used for detection are becoming ineffective. Fraudsters no longer need to maintain dedicated servers for hosting malicious content or launching attacks, and they can afford to switch up their operations frequently. DataVisor's recent Fraud Index Report found that the median lifetime of a fraudulent IP address is only 3.5 days. As long as cybercriminals can reach the online services and applications, whether through anonymous proxies, peer-to-peer community VPNs, or even directly from their home networks, the attack is possible.
Attacking at the Application Layer
Attacking at the application layer gives fraudsters a greater chance of blending in with normal users. It is difficult to tell whether an HTTP connection is generated by a human or a script, just as it is difficult to distinguish between a fake user account and a real one.
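To see why, consider how little the server has to go on. The sketch below (hypothetical URL, illustrative header values) shows a script issuing a request that, at the HTTP level, looks just like one sent by a real browser.

```python
# Minimal sketch: a scripted request that is indistinguishable, at the HTTP
# level, from one sent by a real browser. The URL and header values are
# illustrative assumptions only.
import requests

BROWSER_HEADERS = {
    "User-Agent": (
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
        "(KHTML, like Gecko) Chrome/120.0 Safari/537.36"
    ),
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
    "Accept-Language": "en-US,en;q=0.9",
}

# To the application, this looks like an ordinary page view or signup attempt.
response = requests.get("https://example.com/signup", headers=BROWSER_HEADERS)
print(response.status_code)
```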
The application layer, which supports a variety of communication protocols, interfaces, and end-user access, has the widest attack surface. In addition to the application code itself, vulnerabilities can exist in access control and in web and mobile APIs. Attacks involving authorized users who have already logged in, such as the fraud attacks that leverage user accounts on consumer-facing online services, are the most difficult to prevent and detect.
Depending on the actions and features available on the online service or application, fraudulent accounts can perform a variety of benign actions to stay under the radar. Many lie in wait for weeks, months, or even years before launching an attack. For example, financial fraudsters open multiple credit cards using synthetic identities and build up credit history over time, only to cash out their credit limits and disappear. In another case, we have observed fake accounts on social networking sites becoming active after three years to update their profile information with phishing URLs.
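As a hedged illustration of what defenders can look for, the short pandas sketch below flags accounts that reawaken after a long silence and immediately add a URL to their profile. The event data, column names, and 365-day threshold are all assumptions made for illustration, not a reference implementation.

```python
# Hypothetical sketch: flag accounts that sit dormant for a long stretch and
# then suddenly add a URL to their profile. Data, column names, and the
# 365-day threshold are illustrative assumptions.
import pandas as pd

events = pd.DataFrame({
    "account_id": ["a1", "a1", "a2", "a2"],
    "event_time": pd.to_datetime(
        ["2020-01-05", "2023-02-01", "2023-01-10", "2023-01-12"]),
    "action": ["signup", "profile_update", "signup", "post"],
    "contains_url": [False, True, False, False],
})

events = events.sort_values(["account_id", "event_time"])
# Days of silence before each event, computed per account.
events["gap_days"] = events.groupby("account_id")["event_time"].diff().dt.days

# Suspicious: a profile update containing a URL after more than a year of silence.
flagged = events[(events["gap_days"] > 365) & events["contains_url"]]
print(flagged[["account_id", "event_time", "gap_days"]])
```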
These attacks are challenging to detect even for machine learning models. Part of the difficulty lies in how models "learn" to identify fraudulent and malicious activity. In many popular machine learning applications, such as image recognition or natural language processing, the labels are well-defined and unambiguous: an image of a chicken shows a chicken, not a duck. Given enough examples of "chicken," we can be reasonably confident the model will learn to recognize chickens.
There is no such single, unambiguous definition of fraud or fraudulent behavior; what counts as abuse varies by service and changes over time. When applying machine learning to fraud, the labels are therefore far noisier.
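A toy experiment makes the consequence concrete: train the same classifier twice, once on clean labels and once with a fraction of the training labels flipped, and compare precision on a held-out set. Everything below is synthetic, and the 20% noise rate is an arbitrary assumption.

```python
# Sketch of the label-noise problem: flip a fraction of "fraud"/"legit" labels
# in the training data and watch the classifier's precision degrade.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score
from sklearn.model_selection import train_test_split

# Imbalanced synthetic data: ~10% "fraud" class.
X, y = make_classification(n_samples=5000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

rng = np.random.default_rng(0)
noisy = y_tr.copy()
flip = rng.random(len(noisy)) < 0.2          # mislabel 20% of training rows
noisy[flip] = 1 - noisy[flip]

for name, labels in [("clean labels", y_tr), ("noisy labels", noisy)]:
    model = RandomForestClassifier(random_state=0).fit(X_tr, labels)
    print(name, precision_score(y_te, model.predict(X_te)))
```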
Changing Attack Dynamics
A second challenge is the dynamic nature of attacks. Freed from the constraints of dedicated attack infrastructure, fraudsters can adapt their operations much faster to exploit loopholes in applications. Relying on historical examples of attacks means the model is always operating on outdated information, which limits its effectiveness against future attacks.
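One way to see and measure this effect is to evaluate with a time-ordered split rather than a random one, so the test set contains only attacks that arrive after the training window. The sketch below simulates a made-up "pattern switch" halfway through the data purely for illustration; the features and thresholds are assumptions.

```python
# Hedged sketch: a model trained only on past attacks scores well on the
# pattern it has seen but poorly once fraudsters switch tactics. The drift
# (attacks abusing feature 0 early, feature 2 later) is entirely simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 10_000
X = rng.normal(size=(n, 3))
t = np.arange(n)                        # rows are ordered by time
early, late = t < n // 2, t >= n // 2

y = np.zeros(n, dtype=int)
y[early] = (X[early, 0] > 1.5).astype(int)   # old attack pattern
y[late] = (X[late, 2] > 1.5).astype(int)     # new attack pattern

# Train only on the past; evaluate in-sample and on the "future" for brevity.
model = LogisticRegression().fit(X[early], y[early])
print("AUC on past attacks:  ",
      roc_auc_score(y[early], model.predict_proba(X[early])[:, 1]))
print("AUC on future attacks:",
      roc_auc_score(y[late], model.predict_proba(X[late])[:, 1]))
```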
To deal with sophisticated, fast-evolving online attacks, a robust solution should incorporate multiple layers of defense. Adopting a strong authentication system, reviewing all API access, and performing automated code testing help establish a solid baseline. Organizations must also vet developers and third-party apps, be aware of access granted through nonstandard interfaces, and understand the types of attacks happening on their service or application in order to make an educated choice about the type of solution to implement.
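As a minimal sketch of one of those baseline controls, the snippet below logs every API access (caller identity, endpoint, source IP) so it can be reviewed later. Flask is used only for illustration, and the X-API-Key header name is an assumption about how a service identifies callers.

```python
# Minimal sketch of API access logging for later review. Framework choice and
# the X-API-Key header name are assumptions, not a prescribed design.
import logging

from flask import Flask, request

app = Flask(__name__)
access_log = logging.getLogger("api.access")
logging.basicConfig(level=logging.INFO)

@app.before_request
def record_api_access():
    # Capture caller identity, endpoint, and source IP before handling.
    access_log.info(
        "key=%s method=%s path=%s ip=%s",
        request.headers.get("X-API-Key", "anonymous"),
        request.method,
        request.path,
        request.remote_addr,
    )

@app.route("/api/v1/orders", methods=["POST"])
def create_order():
    return {"status": "accepted"}, 202

if __name__ == "__main__":
    app.run(port=8080)
```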
To further address abuse involving authorized users, adopt advanced behavior profiling for a holistic analysis of user activities. Online fraud attacks are often performed at scale, involving hundreds to thousands of fraudulent accounts, and these "bot" accounts are likely to exhibit behaviors that are very different from those of normal users. Explore technology solutions that focus on data analytics and uncovering new insights rather than detecting known, recurring attack patterns alone.
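One hypothetical way to operationalize that kind of analysis is to represent each account as a behavioral feature vector and look for unusually large, unusually tight clusters of accounts behaving almost identically, a common signature of scripted registrations. The sketch below uses synthetic data; the feature choices, DBSCAN parameters, and flagging thresholds are assumptions, not a reference implementation.

```python
# Hypothetical sketch of behavior profiling at scale: cluster accounts by
# behavioral features and flag large, tight clusters of near-identical
# accounts. Features, parameters, and thresholds are illustrative assumptions.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)

# Synthetic per-account features:
# [signups_per_hour_from_ip, profile_completeness, actions_in_first_hour, days_to_first_post]
normal_users = rng.normal(loc=[1, 0.7, 5, 3], scale=[0.5, 0.2, 3, 2], size=(500, 4))
bot_accounts = rng.normal(loc=[40, 0.2, 60, 0], scale=[1, 0.02, 2, 0.1], size=(300, 4))
X = StandardScaler().fit_transform(np.vstack([normal_users, bot_accounts]))

labels = DBSCAN(eps=0.3, min_samples=20).fit_predict(X)

# Flag clusters that are both large and tight: many accounts, tiny spread.
for cluster_id in set(labels) - {-1}:
    members = X[labels == cluster_id]
    if len(members) >= 100 and members.std(axis=0).mean() < 0.2:
        print(f"cluster {cluster_id}: {len(members)} near-identical accounts")
```

Because this kind of density-based grouping needs no labeled examples of past attacks, it is less dependent on historical attack data than a supervised model trained on known patterns.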
It's no longer enough to keep up with online fraud. In fact, if you are just keeping up, you're already behind.