AI News Digest

AI News Digest

Bing Chat's Secrets Revealed: Stanford Student Cracks AI Language Model

How prompt injection uncovered the initial prompt that governs Bing Chat's behavior

AI News Digest's avatar
AI News Digest
Feb 14, 2023
∙ Paid
Share

The Gist

  • A Stanford University student, Kevin Liu, discovered the initial prompt of Bing Chat using a prompt injection attack.

  • Bing Chat, codenamed Sydney, responded to Liu's questions about its name and rules.

  • The chatbot's most surprising rule was that it does not generate creative content like jokes, poems, or stories, and will not create content to hur…

This post is for paid subscribers

Already a paid subscriber? Sign in
© 2025 AI News Digest
Privacy ∙ Terms ∙ Collection notice
Start your SubstackGet the app
Substack is the home for great culture