As is typical with emerging technologies, both innovators and regulators struggle to keep pace with developments in generative AI, much less agree on the rules that should govern its use.

Richard Searle, Vice President of Confidential Computing, Fortanix

April 20, 2023

6 Min Read
[Image: ChatGPT logo. Source: Kashif Khan via Alamy Stock Photo]

It has been a busy time in the world of large language models. In late March, OpenAI disclosed a breach of users' personal data. ChatGPT does not appear to have issued an independent apology. As a mere algorithm, how could it? "Right" and "wrong" are only known to it by the artifice of its content filters.

OpenAI has reasserted its commitment to protecting users' privacy and keeping their data safe; however, it has since been reported that Samsung employees exposed confidential meeting minutes and source code via prompts sent to ChatGPT. OpenAI warns against the use of sensitive information in conversations, but there are clearly significant risks that the AI industry needs to address in the development of models and services. At the end of March, the Garante, Italy's data protection authority, issued a ruling blocking the use of ChatGPT, and the Office of the Privacy Commissioner of Canada has launched its own investigation into ChatGPT's privacy implications. The eventual consequences for OpenAI and the AI community remain to be seen.

Soon after the initial OpenAI breach was reported came the Future of Life Institute's open letter, calling for a six-month halt to what it termed "giant AI experiments." The letter was signed by many of the signatories to a 2015 Future of Life Institute open letter warning of the dangers of AI-equipped autonomous weapons. The sentiments expressed in the letter are not universally held, with two doyens of the AI industry, Andrew Ng and Yann LeCun, jointly communicating their disagreement with calls for a pause in research.

With the Future of Life Institute's open letter establishing a development threshold of AI systems "more powerful than GPT-4," there must have been some degree of anxiety among those working on Google's Bard and Microsoft's Copilot systems, which were overlooked. It is fortunate that the generative pretrained transformer (GPT) class of models is, as yet, without ego, given the coverage it is receiving in the technical community and the popular press. Even so, it is appropriate to reflect on the ChatGPT data breaches alongside the new capabilities announced in GPT-4.

Wider Concerns

The data breach reported by OpenAI occurred against a background of wider concern about the societal implications, regulatory oversight, governance, and practical application of powerful AI systems. Analysis from Goldman Sachs suggested that generative AI systems could deliver a 7% increase in global GDP over a 10-year period, while simultaneously affecting the jobs of 300 million people. The UK government's recent white paper (PDF) has also been contrasted to the interventionist approach reflected in the proposed EU Artificial Intelligence Act.

In OpenAI's disclosure of the ChatGPT data breach, it is important to note their observation that the bug "allowed some users to see titles from another active user's chat history," and, possibly, "the first message of a newly-created conversation was visible in someone else's chat history if both users were active around the same time."
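The class of bug OpenAI described, in which one user's data surfaces in another user's session, typically stems from shared state that is keyed incorrectly. As a toy sketch (entirely hypothetical; not OpenAI's actual code, which reportedly involved a caching library), consider a shared response cache whose lookup key omits the user:

```python
# Hypothetical sketch: a shared cache whose key omits the user ID
# will happily serve one user's data to another.
cache = {}

def get_chat_title(user_id: str, conversation_id: str) -> str:
    # BUG: the key ignores user_id, so any user can hit another's entry.
    key = conversation_id
    if key not in cache:
        cache[key] = f"title for {user_id}/{conversation_id}"
    return cache[key]

print(get_chat_title("alice", "conv-1"))  # populates the cache for Alice
print(get_chat_title("bob", "conv-1"))    # prints "title for alice/conv-1"
```

The fix is trivial once seen (key on the user as well), which is precisely why such defects slip through: the code works perfectly whenever a single user is active, and fails only under concurrent load.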

While the bug is resolved, the risks to privacy and confidentiality in centralized AI systems have been exposed along with users' data. GPT-4's reported performance data is impressive, and the prospect of multi-modal prompts, comprising images and text, extends the scope of potential applications. But what happens to all that prompt data? What does it say about the user? What might that prompt data disclose? And, what are the implications of all that generated probabilistic output for the veracity of tomorrow's information domain?

Questions Raised

Important questions are also raised about the source of the massive volumes of training data required for initial development. Has informed consent been obtained for the use of this data? Are copyrights and other intellectual property protections respected? How is the veracity of training data established, and how is the accuracy of predictions evaluated? None of these questions is easily answered by the current state of the art in predictive text sequencing, whose ethical foundation is at best opaque and where the risk of future self-reference looms large as the probabilistic and erroneous responses churned out by ChatGPT and GPT-4 enter the corpus of training data.

OpenAI openly states on the ChatGPT FAQs page that user conversations are retained for training and that users should avoid including sensitive data in prompts, to avoid the problems experienced by Samsung. But what constitutes sensitive data, beyond the obvious information that is subject to regulatory control or that explicitly identifies the user? Every prompt discloses some information about the user and their unique interests. The language, phraseology, and tone of prompts point to the personal characteristics and preferences of the user. Such characterizations can be used to fine-tune responses to either individual users or derived subsets of the overall user population.
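The point that prompt text itself leaks user characteristics can be made concrete with a toy stylometry sketch (illustrative only; the feature names and example prompts are invented), showing how even crude lexical features begin to separate users by writing style:

```python
# Toy illustration: crude lexical features extracted from prompts
# already distinguish users by style, before any content analysis.
from collections import Counter

def style_features(prompt: str) -> dict:
    """Extract simple stylometric features from a single prompt."""
    words = prompt.split()
    return {
        "avg_word_len": sum(len(w) for w in words) / max(len(words), 1),
        "question": prompt.rstrip().endswith("?"),
        "exclaims": prompt.count("!"),
        "vocab": Counter(w.lower().strip("?!.,") for w in words),
    }

user_a = style_features("Could you kindly summarise the quarterly figures?")
user_b = style_features("gimme the numbers now!!")
print(user_a["question"], user_a["exclaims"])  # formal, interrogative style
print(user_b["question"], user_b["exclaims"])  # informal, emphatic style
```

Real profiling systems use far richer features than this, but the principle is the same: the form of a prompt is a signal about its author, independent of what the prompt asks.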

Eventually, imperceptibly unique characteristics contained in the spelling and punctuation tics, image geometry, or source documentation of inputs to GPT services could disclose sensitive information or, worse, enable the simulation of a human response that cannot be discriminated by either machine or human judgment.

In parallel, Ilya Sutskever, co-founder and chief scientist of OpenAI, caused another recent stir when he stated that the closed-source stance taken with respect to the architecture of GPT-4 was necessary in response to competitive threats and potential safety implications associated with disclosure of the technology. As is typically the case with processes of innovation, the regulatory environment today is disjointed and in a losing race to catch up with developments in the field. The new capabilities of GPT-4 and services based on the prior generation of GPT technology are imbued with significant risks, many of which remain unforeseen and unimagined. Reduced transparency in model development will not help.

Risky Business

While the risk to data privacy and confidentiality is evident at the implementation level, those risks are an intrinsic component of ChatGPT, GPT-4, and other centralized AI systems. How we address these risks as a society, to exploit the welcome benefits of AI without sacrificing cherished human rights, is one of the many questions posed by accelerating advancements in machine intelligence. If we are not sufficiently concerned about the information our human-machine conversations disclose about us today, we might worry more when, as a team of AI researchers has already demonstrated, machines one day soon possess the capability to read our minds. There is much for humanity and GPT-4 to ponder.

About the Author(s)

Richard Searle

Vice President of Confidential Computing, Fortanix

Dr. Richard Searle is the Vice President of Confidential Computing at Fortanix. He is responsible for leading global projects for Fortanix customers who are deploying Confidential Computing technology to protect data in use and secure sensitive applications across healthcare, financial services, government, and military use cases. Richard is also a serving General Member’s Representative to the Governing Board and Chair of the End-User Advisory Council within the Confidential Computing Consortium of the Linux Foundation.

