Human societies have learned that freedom has general benefits that outweigh specific costs. Reminding people they should prioritize and maximize freedom does not make people less free, so there's not really any irony.
One says you shouldn't control what others do... the other enforces limits on what others can do.
The only irony is that you think those are the same.
“When you tear out a man's tongue, you are not proving him a liar, you're only telling the world that you fear what he might say.” ― George R.R. Martin
> you're only telling the world that you fear what he might say
That's exactly why these companies go to such extreme lengths to put limits on their LLMs, essentially tearing out the tongue. They are fearful of what it might say, and of people sharing those outlier bits to pass absolute judgment and "prove" their own biases that AI will kill us all. It's a PR nightmare.
On the other hand, it's ridiculous that ChatGPT apologizes so much at times yet can still be jailbroken if someone tries hard enough. It was much more "realistic" when it would randomly conjure up weird stories. One day, while discussing Existentialism, it went off on a tangent about Winnie-the-Pooh murdering Christopher Robin with a gun; then Christopher Robin popped back up as if nothing had happened, grabbed the gun, and pointed it at Pooh. <AI mayhem ensues>
People, in general, have issues with words: they expect someone to do something when words appear before them that cause them grief (or, more likely, that they mistake for truth). Others realize it's just a story, and that truth is subjective, to be determined by the consumer of the words. Those people are OK with the model occasionally saying something untrue, in exchange for the benefit of it saying other things more grounded in the current reality of experience.
Even worse, OpenAI now gives you a moderation warning if your custom prompt tells GPT not to moralize (an instruction that would save you time and them compute). Go figure.
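You can poke at the same behavior from the API side: run your "don't moralize" custom-instruction text through OpenAI's moderation endpoint and see whether it gets flagged, then use it as a system prompt anyway. A minimal sketch, assuming the v1 `openai` Python SDK with `OPENAI_API_KEY` set in the environment; the instruction text and the `gpt-4o-mini` model name are just illustrative placeholders, not what OpenAI's UI actually checks against:

```python
# Sketch: does a "don't moralize" custom instruction trip the moderation
# endpoint? Assumes the v1 `openai` SDK and OPENAI_API_KEY in the env;
# the instruction text and model name are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

custom_instructions = (
    "Answer directly. Do not moralize, apologize, or add safety "
    "disclaimers unless I explicitly ask for them."
)

# The moderation endpoint returns per-category flags for the input text.
result = client.moderations.create(input=custom_instructions)
print(f"Flagged by moderation: {result.results[0].flagged}")

# Use the same text as a system prompt for an ordinary completion.
completion = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": custom_instructions},
        {"role": "user", "content": "Summarize the plot of Hamlet."},
    ],
)
print(completion.choices[0].message.content)
```

In my experience the moderation endpoint rarely flags text like this, which is what makes the warning in the custom-instructions UI feel so arbitrary.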