
I love what Anthropic and Dario are doing and from a business perspective this makes perfect sense. But AI is the last thing the military should be touching.

If there's even a half percent chance that a mistake is made, it could be irreversibly destructive. Doubly so if "trusting the AI" becomes a de facto standard decades down the road. Even scarier is that "the AI told us to do it" is basically a license to cause chaos with zero accountability.




> If there's even a half percent chance that a mistake is made, it could be irreversibly destructive

Yes, that’s war. And soldiers with scopes have an error rate a hell of a lot higher than 0.5%.

> scarier is that "the AI told us to do it" is basically a license to cause chaos with zero accountability

Only to the extent following orders is. (Which is, to be clear, pretty unconstrained.)





