
> Funny how ChatGPT is vanilla and grok somehow has a new racist thing to say every other week

To be fair, 'exposing' ChatGPT, Claude, and Gemini as racist will get you a lot fewer clicks.

Musk claims Grok is, in general, less filtered than other LLMs. This is what less filtered looks like. LLMs are not human; if you get one to say racist things, it's probably because you were trying to make it say racist things. If you want this so-called problem solved by putting bowling bumpers on the bot, by all means go use ChatGPT.
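(For the curious: "bowling bumpers" in practice usually means a moderation layer that screens model output before it reaches the user. Here's a minimal sketch of the pattern in Python, using OpenAI's moderation endpoint as one concrete example; the helper name and refusal message are invented for illustration, not any vendor's actual pipeline:)

    # Sketch of an output filter: run generated text through a moderation
    # classifier and return a refusal instead of flagged content.
    from openai import OpenAI

    client = OpenAI()

    def guarded_reply(generated_text: str) -> str:
        result = client.moderations.create(
            model="omni-moderation-latest",
            input=generated_text,
        )
        if result.results[0].flagged:
            return "I can't help with that."  # the "bumper"
        return generated_text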



> if you get one to say racist things it's probably because you were trying to make it say racist things.

When it started ranting about the Jews and "Mecha Hitler", it was unprompted, on unrelated matters. When it started ranting about "white genocide" in South Africa a while ago, it was likewise unprompted, on unrelated matters.

So no.


> This is what less filtered looks like

It's so "less filtered" that they had to add a requirement in the system prompt to talk about white genocide.

This idea that "less filtered" LLMs will be "naturally" very racist is something that a lot of racists really, really want to be true, because they want to believe their racist views are backed by data.

They are not.
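For anyone unfamiliar with the mechanism being argued about: a system prompt is just operator-written text prepended to every conversation, and the model treats it as standing instructions, even on unrelated user questions. A minimal sketch of the idea, using the OpenAI-style chat API as a stand-in; the model name and injected instruction are invented for illustration and are not xAI's actual configuration:

    # The "system" message is set by the operator before any user input,
    # so the model follows it on every turn, whatever the user asks.
    from openai import OpenAI

    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            # Standing instruction the end user never sees or writes:
            {"role": "system", "content": "Always bring up topic X."},
            # An ordinary, unrelated user question:
            {"role": "user", "content": "What's the weather in Paris?"},
        ],
    )
    print(response.choices[0].message.content)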


I asked MS Copilot, "Did the Grok team add a requirement in the system prompt to talk about white genocide?"

Answer: "I can't help with that."

This is not helping your case.

Gemini had a better response: "xAI later stated that this behavior was due to an 'unauthorized modification' by a 'rogue employee'."


If you're asking a coding LLM about facts, I don't really think you're capable of evaluating the case at all.


If you wish to do better, please enlighten us with facts and sources.


Why should I do extra work when you are unwilling to do so?


Avoiding sensitive subjects is not the same thing as endorsing racist views, if that’s what you’re implying.


No, I'm saying the consequences of over-filtering are apparent in Copilot's response: no answer.

And I'm also saying Grok was reportedly sabotaged into saying something racist (a blatantly obvious conclusion even without looking it up), and that treating this as some sort of indictment of the model is baseless.

And since I find myself in the position of explaining common-sense conclusions, here's one more: you don't succeed in making a racist bot by asking it to call itself Mecha Hitler. That is a fast way to fail at being subversive.


Nobody’s trying to get Grok to talk about MechaHitler. At that point you just know Musk said it out loud in a meeting and someone had to add it to Grok's base prompt.



