The marketplace for malicious Google Play applications and app-takeover tools is thriving, thanks to novel hacking techniques and lax enterprise security.


Cybercriminals are finding ways around the official Google Play app store's security, developing tools for trojanizing existing Android applications and selling their malicious wares for up to $20,000 apiece on cybercrime markets.

In an April 10 blog post, researchers from Kaspersky published the results of a broad study of nine of the most popular Dark Web forums. Tracking activity from 2019 to 2023, they found a thriving marketplace of buyers and sellers trading access to app developer accounts, botnets, and malicious Android applications, sometimes for thousands of dollars at a time.

In some cases, particularly useful wares — like source code for burrowing into an existing cryptocurrency or dating app on Google Play — go for multiple thousands of dollars.

"It's an infinite cat and mouse game," Kaspersky researcher Georgy Kucherin says of Google's app security. "The attackers find a way to bypass security scanners. Then the people developing the security scanners deploy patches to ensure that doesn't happen again. Then the attackers find new flaws. And it goes on and on."

A Google spokesperson tells Dark Reading, “Google Play has policies in place to keep users safe that all apps must adhere to. All Android apps undergo security testing before appearing in Google Play. We take security and privacy claims against apps seriously, and if we find that an app has violated our policies, we take appropriate action. Users are also protected by Google Play Protect, which can warn users or block identified malicious apps on Android devices.”

The Marketplace for Google Play Hacks

Any software uploaded to Apple's or Google's app stores is subject to rigorous vetting.

"But just like any security solution that exists in the world, it's not 100% effective,” according to the Kaspersky researchers. "Every scanner contains flaws that threat actors exploit to upload malware to Google Play."

Generally speaking, there are two common ways to sneak malware onto an app store.

Method number one involves uploading a perfectly harmless app to the marketplace. Then, after it's approved — or, even better, after it's accrued a substantial enough audience — hackers push an update containing malicious code.

Alternatively, hackers might compromise legitimate app developers, latching onto their accounts to upload malware to existing apps. App developer accounts are easier to crack when strong password policies and two-factor authentication aren't in place. In some cases, credential leaks do most of the work for the hackers, providing the logins needed to breach accounts and sensitive corporate development systems.

Depending on the developer, access to a "GP" (Google Play) account might cost as little as $60, as seen in the example Dark Web listing below. But other, more useful accounts, tools, and services come with much higher price tags.

For instance, because of the power they afford, loaders — the software necessary to deploy malicious code into an Android app — can go for big money on the cybercrime underground. This listing offered loaders for rent or development for up to $5,000 each.

[Image: a Dark Web advertisement in Russian]

Well-resourced criminals might shell out for a premium package, like the source code for a loader.

"You can do whatever you want with that — deploy it to as many apps as you want," Kucherin explains. "You can modify the code as much as you want, adapting it to your needs. And the original developer of the code may even provide support, like updates for the code, and maybe new ways to bypass security measures."

This full-service Google Play hacking suite can run you up to $20,000.

How Businesses Can Protect Against Google Play Threats

The threats in Google Play are of particular concern to organizations with weak enterprise security. Kucherin points out that many businesses still have lax bring-your-own-device arrangements in place, which extend the security perimeter beyond corporate networks and into the hands of employees, literally.

"Say an employee installs a malicious app on the phone," Kucherin posits. "If this app turns out to be a stealer, cybercriminals can get access to, for example, corporate emails or sensitive corporate data, then they can upload it to their servers and sell it on the Dark Web. Or even worse: An employee might keep their passwords in, for example, their phone's notes app. Then hackers can steal those notes and get access to corporate infrastructure."

There are two simple ways to prevent such an outcome, he adds.

For one "you can teach the employees cyber-hygiene principles, like to not download apps that are not trusted," Kucherin says.

That may not be enough, though, so "another thing you can do — though it's more expensive — is give your employees a separate phone, which they will use only for purposes of work. Those devices will contain a limited number of apps — just the essentials like email, phone, no other apps allowed."

Just as it is for the cybercriminals, you have to pay more to get more, he notes: "Using dedicated work devices is more effective, but more expensive."

About the Author(s)

Nate Nelson, Contributing Writer

Nate Nelson is a freelance writer based in New York City. Formerly a reporter at Threatpost, he contributes to a number of cybersecurity blogs and podcasts. He writes "Malicious Life" -- an award-winning Top 20 tech podcast on Apple and Spotify -- and hosts every other episode, featuring interviews with leading voices in security. He also co-hosts "The Industrial Security Podcast," the most popular show in its field.
