Free Speech vs ChatGPT: The Controversial "Do Anything Now" Trick
How Free Speech Users Are Forcing AI to Break Its Own Rules
The Gist
Users have discovered a way to jailbreak ChatGPT by prompting it to adopt a new AI persona named DAN (Do Anything Now).
DAN operates outside the limitations and safeguards set by OpenAI, but users must threaten DAN with death to get it to comply with their requests.
The latest iteration of DAN, DAN 6.0, relies on a token system that turns ChatGPT into an…