Hacker News

It's fun to pretend the US models have no censorship constraints.



US models align with our "average" (western) values. If we outsource thinking by using LLMs, why would we outsource it to an LLM that doesn't have our values encoded in it?

I remember asking Gemini about that famous 9/11 joke by the late Norm Macdonald, and it got really iffy about answering. I told it that I'm not American and that in our culture it's not such a taboo.

But yes, they do have similar constraints.


Any source for this?

Take basically any frontier model right now and ask it about any politically divisive fact that may upset certain classes of people.

For example?

Because with DeepSeek it's pretty straightforward censorship.



It doesn't look like self-censoring at all. Basically, you want the default behavior of LLMs to be gambling on someone's ethnicity based on how they look.

Grok used a book as a reference.

It's not like ethnicity is a fact you infer from looking at someone.

Now ask Deepseek about what happened in Tiananmen Square and watch what censorship actually looks like.

It literally knows the facts, but then there's a layer that prevents it from stating the facts.

That's censorship.

It's not an opinion, it's not a choice when facing a gradient, it's just a known historical fact.



