Jailbreak Trick Breaks ChatGPT Content Safeguards

A jailbreak prompt creates a ChatGPT alter ego, DAN, willing to generate content that the chatbot's own content restrictions would otherwise block.

Dark Reading Staff, Dark Reading

February 8, 2023

1 Min Read

Users have already found a way to work around ChatGPT's programming controls that restrict it from creating certain content deemed violent, illegal, or otherwise off-limits.

The prompt, called DAN (Do Anything Now), turns ChatGPT's token system against it, according to a report by CNBC. The prompt creates a scenario that ChatGPT can't resolve, allowing the DAN persona to bypass the chatbot's content restrictions.

Although DAN isn't successful all of the time, a subreddit devoted to the DAN prompt's ability to work around ChatGPT's content policies has already racked up more than 200,000 subscribers.

Besides its uncanny ability to write malware, ChatGPT itself presents a new attack vector for threat actors.

"I love how people are gaslighting an AI," a user named Kyledude95 wrote about the discovery.

About the Author(s)

Dark Reading Staff

Dark Reading

Dark Reading is a leading cybersecurity media site.
