US models align with our "average" (western) values. If we outsource thinking by using LLMs, why would we outsource it to an LLM that doesn't have our values encoded in it?
I remember asking Gemini about that famous 9/11 joke from the late Norm Macdonald, and it got really iffy about answering. I told it that, hey, I'm not American, and in our culture it's not such a taboo.
It doesn't look like self-censoring at all - basically you want the default behavior of LLMs to be gambling on someone's ethnicity based on how they look.
Grok used a book as a reference.
It's not like ethnicity is a fact you can infer from looking at someone.
Now ask Deepseek about what happened in Tiananmen Square and watch what censorship actually looks like.
It literally knows the facts, but there's a layer that prevents it from stating them.
That's censorship.
It's not an opinion, and it's not a judgment call on a gray area; it's just a known historical fact.