Dark Reading is part of the Informa Tech Division of Informa PLC


Risk

Apple Yanks Privacy Watcher From App Store

Clueful privacy app reported on free iOS apps' data-gathering practices, found 41% tracking users' location.

Want to install a well-reviewed iPhone app that looks at the free apps on your smartphone, then tells you which ones might be slurping excessive amounts of personal information?

Previously, you could buy Clueful ($3.99) in Apple's App Store. The app, developed by Romanian security software firm Bitdefender, "looks at what apps are on your iPhone and then fetches privacy details about them," according to the company's website. Those privacy details are based on Bitdefender's analysis of the data-collection practices of about 60,000 free iOS apps.

As of June 30, however, Apple dropped Clueful from its App Store, although anyone who had already bought the app can continue to use it. What was the impetus for the privacy-watching app's takedown? "Apple informed Bitdefender's product development team of the removal--for reasons we are studying--after it was approved under the same rules," according to a statement released by Bitdefender, which said it has signed a related non-disclosure agreement with Apple. "We are working hard toward understanding why our app was removed and to develop the app to improve its chances of staying there."

[ Mobile security is a serious problem. See Android Apps Need Universal Encryption. ]

Why bother watching what apps can access? Answering that question depends on how much trust users put both in the developers behind an app--especially a free one--and in Apple's app-review process. While Apple hasn't explicitly detailed what its app-review teams test before approving or rejecting an app or app update, it's possible that Apple puts submitted apps to a privacy test. Then again, it's also possible that Apple doesn't check for questionable data-gathering practices.

Clueful, however, offered to provide greater clarity on the matter, thanks to Bitdefender's iOS app analysis, which logged which apps could access a user's iPhone address book, which used analytics or tracked a user's location, and which could access Facebook or Twitter credentials. It also noted which apps displayed advertisements, as well as apps with the potential to drain excessive amounts of battery life through their use of background services, GPS, or audio.

"While most app developers use this information for legitimate purposes, others might not," said Catalin Cosoi, Bitdefender's chief security researcher, in a blog post. Or as the Clueful FAQ noted, "an app that provides backup for your contacts has every right to access your entire address book, but why should a flashlight app do the same?"

Interestingly, Bitdefender's related analysis of the 60,000 iOS apps, conducted in recent months, uncovered some significant privacy concerns. For starters, 43% of iOS apps didn't encrypt people's personal data when it was being transmitted. As a result, if the user was on an unsecured Wi-Fi connection, the transmitted personal information could be sniffed by an attacker. Bitdefender also found that 41% of apps were tracking a user's location, and almost 20% had full access to a user's iPhone address book. Note that Bitdefender so far has only analyzed free apps, since "these were judged more likely to be shady or downright malicious," compared with paid apps, said Bitdefender's Stoica Razvan via email. But Bitdefender said it plans to begin scanning paid apps in the future.
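The transmission risk Bitdefender describes boils down to apps shipping personal data over plain HTTP instead of TLS. As a rough illustration of the kind of guard a developer could apply before sending anything sensitive--the function name and endpoints below are hypothetical, not taken from Bitdefender's tooling--such a check might look like this:

```python
from urllib.parse import urlparse

def safe_to_transmit(endpoint: str) -> bool:
    """Allow personal data to leave the device only over TLS (https).

    A plain-http endpoint exposes the payload to anyone sniffing an
    unsecured Wi-Fi network, which is the scenario Bitdefender flagged.
    """
    return urlparse(endpoint).scheme == "https"

# Hypothetical endpoints, for illustration only:
print(safe_to_transmit("https://sync.example.com/contacts"))  # True
print(safe_to_transmit("http://sync.example.com/contacts"))   # False
```

In practice the fix is simply to use TLS for every request that carries personal data, rather than gating individual URLs, but the check shows where the 43% of apps in Bitdefender's sample fell short.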

Based on the Bitdefender research, many developers seem to have programmed their apps to collect more data than they require. That finding is backed up by another study, released earlier this year by South Korean antivirus vendor AhnLab, which scanned more than 150 top-rated Android apps and found that 43% were requesting "excessive permissions," based on what each app said it was designed to do. Likely explanations range from developers wanting to collect as much information as possible on users for potential marketing purposes, to the simple fact that, from a coding perspective, it's faster to grab a lot of data and use what's required than to carefully limit what gets collected in the first place.
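The "excessive permissions" finding is essentially a diff between what an app declares and what its stated purpose requires. A minimal sketch of that comparison follows; the category table and permission lists are invented for illustration, and neither AhnLab nor Bitdefender has published its methodology in this form:

```python
# Hypothetical audit: which declared permissions have no plausible use
# for an app of a given category. The EXPECTED table is invented for
# illustration, not taken from any vendor's real rule set.
EXPECTED = {
    "flashlight": {"android.permission.CAMERA"},
    "contacts_backup": {"android.permission.READ_CONTACTS",
                        "android.permission.INTERNET"},
}

def excessive_permissions(category: str, declared: list) -> set:
    """Return declared permissions beyond what the category plausibly needs."""
    return set(declared) - EXPECTED.get(category, set())

extra = excessive_permissions("flashlight", [
    "android.permission.CAMERA",            # needed to drive the LED
    "android.permission.READ_CONTACTS",     # no obvious justification
    "android.permission.ACCESS_FINE_LOCATION",
])
print(sorted(extra))
# ['android.permission.ACCESS_FINE_LOCATION', 'android.permission.READ_CONTACTS']
```

This mirrors the flashlight-versus-address-book example from the Clueful FAQ: the audit flags exactly the permissions a flashlight app has no evident reason to hold.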

That information security issue isn't limited to Android apps. Earlier this year, for example, a security researcher found that multiple iOS apps, including Path and Hipster, were transmitting people's personal information to the developers' servers, without clearly labeling what they were doing or why. In response, the developers updated their apps to make it clearer how certain settings--such as "find friends"--would lead to parts of the iPhone address book being transmitted to the app developer's servers.

The widespread lack of transparency in how mobile apps collect people's personal information may soon be curtailed, however, thanks to a mobile app privacy program launched earlier this year by the California attorney general. To date, Amazon, Apple, Google, Facebook, HP, Microsoft, and Research In Motion have agreed to participate in the program, which was developed out of a settlement between the state and mobile app distributors.

The state found that many mobile apps were collecting personal data from consumers without clearly disclosing in their privacy policies what was being collected. Accordingly, the aforementioned mobile app distributors have agreed to require any developers who distribute apps through their services to clearly state--in a related privacy policy--what the app collects. The distributors will also provide mechanisms for consumers to report any abuse on the part of developers.

On a related note, California's attorney general's office Thursday announced the launch of a new privacy enforcement and protection unit, which it said "will focus on protecting consumer and individual privacy through civil prosecution of state and federal privacy laws." The office will be staffed by a full-time team that includes six prosecutors who will focus on enforcing privacy laws.
