OpenAI's New GPT Store May Carry Data Security Risks

Third-party developers of custom GPTs (mostly) aren't able to see your chats, but they can access, store, and potentially utilize some other kinds of personal data you share.

4 Min Read
OpenAi logo on a cell phone in front of background will purple stripes that says GPT
Source: SOPA Images Limited via Alamy Stock Photo

A new kind of app store for ChatGPT may expose users to malicious bots, and legitimate ones that siphon their data to insecure, external locales.

ChatGPT's fast rise in popularity, combined with the open source accessibility of the early GPT models, widespread jailbreaks, and even more creative workarounds led to a proliferation of custom GPT models (legitimate and malicious) in 2023. Until now, they were shared and enjoyed by individual tinkerers scattered around different corners of the internet.

The GPT store, launched yesterday, allows OpenAI subscribers to discover and create custom bots (simply, "GPTs") in one place. But being under OpenAI's umbrella doesn't necessarily mean that these will provide the same levels of security and data privacy that the original ChatGPT does.

"It was one thing when your data was going to OpenAI, but now you're expanding into a third-party ecosystem," warns Alastair Paterson, CEO of Harmonic Security, who wrote a blog post on the subject on Jan. 10. "Where does your data end up? Who knows at this point?"

Looks, Acts Like ChatGPT, But Not ChatGPT

OpenAI has not escaped its fair share of security incidents, but the walled garden of ChatGPT inspires confidence for users who like sharing personal information with robots.

The user interface for GPTs from the GPT store is the same as that of the proprietary model. This benefit to user experience, though, is potentially deceptive where security is concerned.

Paterson "was playing around with it yesterday for a little while. It's like interacting with ChatGPT as usual — it's the same wrapper — but actually, data you're putting into that interface can be sent to any third party out there, with any particular usage in mind. What are they going to do with that data? Once it's gone, it's completely up to them."

Not all your data is accessible to the third-party developers of these bots. As OpenAI clarifies in its data privacy FAQs, chats themselves will largely be protected: "For the time being, builders will not have access to specific conversations with their GPTs to ensure user privacy. However, OpenAI is considering future features that would provide builders with analytics and feedback mechanisms to improve their GPTs without compromising privacy."

API-integrated functionalities are a different story, though, as "this involves sharing parts of your chats with the third-party provider of the API, which is not subject to OpenAI's privacy and security commitments. Builders of GPTs can specify the APIs to be called. OpenAI does not independently verify the API provider's privacy and security practices. Only use APIs if you trust the provider."

"If I was an attacker, I could create an app encouraging you to upload documents, presentations, code, PDFs, and it might look relatively benign. It could even encourage you to put out customer data or IP or other sensitive material that you can then use against employees or companies," Paterson posits.

Further, because the company plans to monetize based on engagement, attackers might try to develop addictive offerings that conceal their maliciousness. "It's going to be interesting whether that monetization model is going to drive up some bad apps," he says.

More Apps, More Problems

OpenAI isn't the first company with an app store. Whether its controls are as stringent as those of Apple, Google, and others, though, is a question.

In the two months since OpenAI introduced customizable GPTs, the company claims, community members have already created more than 3 million new bots. "It seems like a very lightweight verification process for getting an app on that marketplace," Paterson says.

In a statement to Dark Reading, a representative of OpenAI told Dark Reading: "To help ensure GPTs adhere to our policies, we've established a new review system in addition to the existing safety measures we've built into our products. The review process includes both human and automated review. Users are also able to report GPTs."

Despite his concerns about the vetting process, Paterson admits that one potential upside from the creation of the app store is it may raise the bar on third-party applications. "As soon as ChatGPT came out there was a plethora of third-party apps for basic functions like chatting with PDFs and websites, often with poor functionality and dubious security and privacy measures," he says. "One hope for the app store would be that the best ones should float to the top and be easier to discover for users."

Paterson says that doesn't mean the apps will necessarily be secure. "But I would hope that the most popular ones may start to take data protection seriously, to be more successful," he adds.

About the Author(s)

Nate Nelson, Contributing Writer

Nate Nelson is a freelance writer based in New York City. Formerly a reporter at Threatpost, he contributes to a number of cybersecurity blogs and podcasts. He writes "Malicious Life" -- an award-winning Top 20 tech podcast on Apple and Spotify -- and hosts every other episode, featuring interviews with leading voices in security. He also co-hosts "The Industrial Security Podcast," the most popular show in its field.

Keep up with the latest cybersecurity threats, newly discovered vulnerabilities, data breach information, and emerging trends. Delivered daily or weekly right to your email inbox.

You May Also Like


More Insights