Prompt leaks like this are never hallucinations in my experience.
LLMs are extremely good at repeating text back out again.
Every time this kind of thing comes up multiple people are able to reproduce the exact same results using many different variants of prompts, which reinforces that this is the real prompt.
LLMs are extremely good at repeating text back out again.
Every time this kind of thing comes up multiple people are able to reproduce the exact same results using many different variants of prompts, which reinforces that this is the real prompt.