Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

Prompt leaks like this are never hallucinations in my experience.

LLMs are extremely good at repeating text back out again.

Every time this kind of thing comes up multiple people are able to reproduce the exact same results using many different variants of prompts, which reinforces that this is the real prompt.



Consider applying for YC's Winter 2026 batch! Applications are open till Nov 10

Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: