I didn't downvote, but it would be because of the "I don't know if any of this is made up" part — if you said "GPT said this, and I've verified it to be correct", that's valuable information, even if it came from a language model. But otherwise (if you didn't verify), there's not much value in the post; it's basically "here is some random plausible text", and plausible-but-possibly-incorrect text is worse than nothing.
They can, when there are entire teams dedicated to adding guardrails via hidden system prompts and running every response through other LLMs trained to flag and edit certain content before the original output gets relayed to the user.
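To make the shape of that pipeline concrete, here's a minimal sketch in Python. Every name in it (`call_main_model`, `call_moderation_model`, the prompt text) is a hypothetical placeholder, not any vendor's actual API; it's just one plausible way to wire a hidden system prompt and an output-screening model together.

```python
# Hypothetical guardrail pipeline: a hidden system prompt is prepended to
# the user's message, and the raw model output is screened (and possibly
# rewritten) by a second moderation model before the user ever sees it.
# All functions here are illustrative stand-ins, not a real vendor API.

HIDDEN_SYSTEM_PROMPT = "You are a helpful assistant. Never discuss topics X, Y, Z."

def call_main_model(system: str, user: str) -> str:
    """Stand-in for a call to the primary LLM."""
    return f"[model reply to: {user!r}]"  # placeholder response

def call_moderation_model(text: str) -> dict:
    """Stand-in for a classifier LLM trained to flag/edit policy violations."""
    flagged = "forbidden" in text.lower()  # toy heuristic, not a real policy
    return {"flagged": flagged, "output": "[redacted]" if flagged else text}

def respond(user_message: str) -> str:
    # 1. Inject the hidden system prompt the user never sees.
    raw = call_main_model(HIDDEN_SYSTEM_PROMPT, user_message)
    # 2. Screen the raw output; relay the (possibly edited) version instead.
    verdict = call_moderation_model(raw)
    return verdict["output"]

if __name__ == "__main__":
    print(respond("Tell me something"))
```

The point of the two-stage design is that neither stage alone is trusted: the system prompt steers generation up front, and the moderation pass catches whatever slips through, so the user only ever sees the post-filtered text.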