
OpenAI may not be the culprit. Patrick Boyle claims ChatGPT answers the question a little differently. It has been proposed that they actually asked Grok, which would certainly be on-brand. So long as there is doubt over which LLM they actually used, there's plausible deniability over where to point the finger.

But that's not the real story of all this.

An AI just made a government decision at the highest level for one of the most powerful countries on the planet, affecting the lives of billions of people.

All it needed was some stooge to ask it for assistance.

Safety researchers must be kicking themselves right now. They've spent years fixating on misalignment and dishonesty[1] when the greatest danger of AI is actually the staggering fallibility of human operators[2].

[1] https://memory-alpha.fandom.com/wiki/The_Ultimate_Computer_(...

[2] https://memory-alpha.fandom.com/wiki/The_Apple_(episode)




As the saying goes, never attribute to malice that which can be adequately explained by incompetence.


I think that is wrong as a rule. Things can be due to malice, and pretending they are not is part of the problem.


I agree it isn't foolproof, and the cases where it doesn't apply can set you up for extra damage if you aren't expecting bad faith.



