Meta's AI-Powered Ray-Bans Portend Privacy Issues

AI will make Meta's smart glasses more attractive for consumers. But can the company straddle cutting-edge functionality and responsible data stewardship?


Meta is rolling out an early access program for its upcoming AI-integrated smart glasses, opening up a wealth of new functionalities and privacy concerns for users.

The second generation of Meta Ray-Bans will include Meta AI, the company's proprietary multimodal AI assistant. By using the wake phrase "Hey Meta," users will be able to control features or get information about what they're seeing — language translations, outfit recommendations, and more — in real time.

The data the company collects in order to provide those services, however, is extensive, and its privacy policies leave room for interpretation.

"Having negotiated data processing agreements hundreds of times," warns Heather Shoemaker, CEO and founder at Language I/O, "I can tell you there's reason to be concerned that in the future, things might be done with this data that we don't want to be done."

Meta has not yet responded to a request for comment from Dark Reading.

Meta's Troubles with Smart Glasses

Meta released its first generation of Ray-Ban Stories in 2021. For $299, wearers could snap photos, record video, or take phone calls all from their spectacles.

From the beginning, perhaps with some reputational self-awareness, the developers built in a number of features for the privacy-conscious: encryption, data-sharing controls, a physical on-off switch for the camera, a light that shone whenever the camera was in use, and more.

Evidently, those privacy features weren't enough to convince people to actually use the product. According to a company document obtained by The Wall Street Journal, Ray-Ban Stories fell roughly 20% short of sales targets, and even the units that did sell quickly started collecting dust. A year and a half after launch, only 10% were still in active use.

To zhuzh it up a little, the second generation model will include far more diverse, AI-driven functionality. But that functionality will come at a cost — and in the Meta tradition, it won't be a monetary cost, but a privacy one.

"It changes the picture because modern AI is based on neural networks that function much like the human brain. And to improve and get better and learn, they need as much data as they can get their figurative fingers into," Shoemaker says.

Will Meta Smart Glasses Threaten Your Privacy?

If a user asks the AI assistant riding their face a question about what they're looking at, a photo is sent to Meta's cloud servers for processing. According to the Look and Ask feature's FAQ, "All photos processed with AI are stored and used to improve Meta products, and will be used to train Meta’s AI with help from trained reviewers. Processing with AI includes the contents of your photos, like objects and text. This information will be collected, used and retained in accordance with Meta’s Privacy Policy."

A look at the privacy policy indicates that when the glasses are used to take a photo or video, much of the information that might be collected and sent to Meta is optional. Neither location services, nor usage data, nor the media itself is necessarily sent to company servers — though, by the same token, users who want to upload their media or geotag it will need to enable those kinds of sharing.

Other shared information includes metadata, data shared with Meta by third-party apps, and various forms of "essential" data that the user cannot opt out of sharing.

Though much of it is innocuous — crash logs, battery and Wi-Fi status, and so on — some of that "essential" data may be deceptively invasive, Shoemaker warns. As one example, she points to one line item in the company's information-sharing documentation: "Data used to respond proactively or reactively to any potential abuse or policy violations."

"That is pretty broad, right? They're saying that they need to protect you from abuse or policy violations, but what are they storing exactly to determine whether you or others are actually abusing these policies?" she asks. It isn't that these policies are malicious, she says, but that they leave too much to the imagination.

"I'm not saying that Meta shouldn't try to prevent abuse, but give us a little more information about how you're doing that. Because when you just make a blanket statement about collecting 'other data in order to protect you,' that is just way too ambiguous and gives them license to potentially store things that we don't want them to store," she says.

About the Author(s)

Nate Nelson, Contributing Writer

Nate Nelson is a freelance writer based in New York City. Formerly a reporter at Threatpost, he contributes to a number of cybersecurity blogs and podcasts. He writes "Malicious Life" -- an award-winning Top 20 tech podcast on Apple and Spotify -- and hosts every other episode, featuring interviews with leading voices in security. He also co-hosts "The Industrial Security Podcast," the most popular show in its field.

