The online fraud prevention industry has taken the brunt of increased privacy actions.

Ayan Halder, Principal Product Manager, Arkose Labs

May 5, 2023

Recently, Meta agreed to pay $725 million to settle the privacy suit over the Cambridge Analytica scandal, which became notorious for alleged voter profiling and targeting during the 2016 US presidential election. The discussion of privacy and the illegal use of personal data has evolved so much since 2016 that Apple and Google have been moving toward more privacy-centric solutions. Apple's Safari blocks third-party cookies by default, and Google's Chrome will follow suit starting in late 2024. Several privacy-focused Internet browsers, such as Mozilla's Firefox and Brave, block user fingerprinting by default to preserve consumers' online privacy. However, there's a (security) cost to too much data privacy, and the online fraud prevention industry has taken the brunt of increased privacy actions.

Online fraud has been in the news for a while and is responsible for various nefarious activities ranging from stolen identities to swinging elections. Identity theft alone resulted in more than $6 billion in financial losses for US consumers in 2021.

An online login looks easy. While logging in to an online account, a consumer enters their username, password, and, occasionally, a one-time passcode delivered to their mobile phone or email address. But a complex web of first- and third-party algorithms and humans works in the background to keep that login secure and free from fraudulent attacks. These systems analyze every incoming request and predict the probability of malicious intent: maybe someone is trying to take over a legitimate user's account, or is planning to use a stolen credit card for e-commerce transactions.
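To make that background work concrete, here is a minimal sketch of how such a system might weigh login signals into a risk score. The signal names, weights, and thresholds are illustrative assumptions, not any vendor's actual model.

```typescript
// Illustrative only: a toy risk score over login signals a fraud prevention
// service might evaluate. Signal names and weights are hypothetical.
interface LoginSignals {
  ipReputationBad: boolean;   // IP seen in recent credential-stuffing traffic
  newDevice: boolean;         // device not previously seen for this account
  impossibleTravel: boolean;  // geolocation jump inconsistent with last login
  failedAttempts: number;     // recent failed password attempts
}

function riskScore(s: LoginSignals): number {
  let score = 0;
  if (s.ipReputationBad) score += 40;
  if (s.newDevice) score += 20;
  if (s.impossibleTravel) score += 30;
  score += Math.min(s.failedAttempts * 5, 25);
  return score; // e.g., challenge above 50, block above 80
}

const decision = riskScore({
  ipReputationBad: false,
  newDevice: true,
  impossibleTravel: false,
  failedAttempts: 2,
}) > 50 ? "challenge" : "allow";

console.log(decision); // "allow" — low combined risk, no extra friction
```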

Online fraud prevention companies depend on the same data sets that companies like Apple and Google harvest, but they use them for very different purposes. Take browser cookies, for example. Marketing companies use cross-site tracking, a technology that leverages cookies to follow a consumer's footprint across the Internet. This invasive practice is so concerning that the European Union's General Data Protection Regulation (GDPR) requires businesses to seek explicit permission from consumers before using anything but the strictly necessary cookies related to the general functioning of a website. Apple and Google have either moved away from cross-site tracking cookies or plan to do so. But this shift also prevents fraud prevention services that rely on third-party cookies to validate a consumer's entitlement to an online account from providing that protection, creating a gap in account security.

The Problem With Broad-Brush Regulation

One of the perils of a broadly defined regulation such as GDPR or the California Consumer Privacy Act (CCPA) is that it is left to interpretation, and the most significant misalignment within the industry is over what constitutes "selling of personal data." The potential penalties for selling personal data without explicit consumer consent are so severe that companies have shied away from one of the oldest concepts in fraud prevention: the consortium. A consortium is a model in which members contribute information about known fraudulent consumers so that other members can use it. Fraud prevention services use third-party cookies in a similar way to keep fraudsters from attacking their customers.
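A minimal sketch of the consortium idea, under the assumption that members share only hashed identifiers of confirmed-fraud accounts rather than raw personal data, might look like this:

```typescript
// Illustrative only: a toy consortium lookup. Members contribute hashed
// identifiers (never raw personal data) of accounts tied to confirmed fraud,
// and other members check incoming identifiers against the shared set.
import { createHash } from "node:crypto";

const hashId = (id: string): string =>
  createHash("sha256").update(id.trim().toLowerCase()).digest("hex");

const consortiumSet = new Set<string>();

// Member A reports a confirmed fraudulent account.
consortiumSet.add(hashId("fraudster@example.com"));

// Member B checks a signup attempt against the shared list.
const isKnownFraud = consortiumSet.has(hashId("Fraudster@Example.com"));
console.log(isKnownFraud); // true — normalization makes the match stable
```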

This misalignment puts businesses at a disadvantage against online fraudsters, who work together and contribute to their own consortium, while legitimate companies, nervous about compliance with various laws, tend to act alone.

Because of the negative sentiment about cookies, marketing companies are moving away from them. While some have adopted privacy-friendly techniques such as Unified ID 2.0, the vast majority rely on a stateless online fingerprint: a unique identifier generated from browser, network, and device characteristics, for which consumers don't need to provide explicit permission. Studies show that such identifiers may not rival a cookie but are helpful in the short to medium term.
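A minimal sketch of such a stateless fingerprint, assuming an illustrative set of attributes (real products combine many more signals than shown here), might look like this:

```typescript
// Illustrative only: a toy "stateless" fingerprint derived by hashing browser,
// network, and device attributes. The attribute list is hypothetical.
import { createHash } from "node:crypto";

interface ClientAttributes {
  userAgent: string;
  language: string;
  timezone: string;
  screenResolution: string;
  ipSubnet: string; // e.g., the first three octets of the client IP
}

function fingerprint(a: ClientAttributes): string {
  const material = [
    a.userAgent,
    a.language,
    a.timezone,
    a.screenResolution,
    a.ipSubnet,
  ].join("|");
  return createHash("sha256").update(material).digest("hex");
}

console.log(fingerprint({
  userAgent: "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
  language: "en-US",
  timezone: "America/Los_Angeles",
  screenResolution: "1920x1080",
  ipSubnet: "203.0.113",
}));
// The same attributes always yield the same identifier without storing a
// cookie on the device; browsers that randomize these attributes break the
// linkage, which is exactly the problem described below.
```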

To counter such privacy-invasive techniques, browsers such as Mozilla's Firefox, Brave, and Tor enable fingerprint-alteration protections by default, preventing the device and browser from being fingerprinted reliably. Online fraudsters know this and heavily leverage these browsers to evade fraud prevention systems.

Given the effectiveness of the fingerprint-alteration techniques used by these browsers, fraud prevention systems fail to distinguish between a good user and a fraudster even when they know abuse is underway. This triggers a brute-force attempt by the fraud mitigation systems to stop the attack, and good users get caught in the mix, experiencing unnecessary friction they're not happy about.

What's Good? What's Bad?

Not being able to distinguish between good and bad users is a limitation with even more significant consequences when businesses set up their systems to reject transactions. Improper classification leads to lost revenue, either from blocking good transactions flagged as suspicious or from failing to stop fraudsters, which leads to chargebacks.

Businesses have crossed so many ethical boundaries using privacy-invading techniques for profit that consumers rarely consider, or even know, how tapping Ask App Not to Track on their iPhones affects their online safety.

Nevertheless, this can be avoided. GDPR and CCPA (updated to the California Privacy Rights Act, or CPRA, in January 2023) have been a blessing for curbing advertising companies' rampant abuse of privacy-invading technologies. The same laws, however, need to acknowledge the other side of the coin. GDPR and CPRA should carve out exclusions that let fraud and abuse prevention companies use personal data, rather than being so strict that these companies shy away from using it. As structured today, these privacy regulations actually give fraudsters an advantage. Ethical use of these techniques should be promoted, and strict enforcement of such clauses is necessary to prevent misuse. Ultimately, regulations that protect privacy by sacrificing online identity and financial security are only half effective.

About the Author(s)

Ayan Halder

Principal Product Manager, Arkose Labs

As the Principal Product Manager at Arkose Labs, Ayan leads Detection product strategy and development to help businesses identify and defend against botnet attacks and human-driven fraud. Ayan has over 10 years of professional experience across several domains, including more than five years exclusively in fraud detection at TeleSign and Arkose Labs. He has deep domain expertise in fraud detection and actively writes about the opportunities and challenges of this growing field.
