Anthropic specifically are the people who talk about "model alignment" and "harmful outputs" the most, and whose models are by far the most heavily censored. This is all done on the basis that AI has a great potential to do harm.
One would think that this kind of outlook should logically lead to keeping this tech away from applications in which it would be literally making life or death decisions (see also: Israel's use of AI to compile target lists and to justify targeting civilian objects).
Why do you think humans would make better life or death decisions? Have we never had innocent civilians killed overseas by the US military as a result of human error?
The problem with these things is that they allow humans to pretend that they are not responsible for those decisions, because "computer told me to do so". At the same time, the humans who are training those systems can also pretend to not be responsible because they are just making a thing that provides "suggestions" to humans making the ultimate decision.
With self-driving cars, some human will be held responsible in the event of an accident, I hope. Why would it be different here? It seems like a responsibility problem, not a technology one.
I'm not talking about formal responsibility here, especially since the enforcement mechanisms for stuff like war crimes are very weak due to the lack of a single global authority capable of enforcing them (see the ongoing ICC saga). It's about whether people feel personally responsible. AI provides a way to diffuse and redirect the moral responsibility that might otherwise deter them.
I hear where you are coming from, but if an AI company is going to be in this field, wouldn't you want it to be the company with as many protections in place as possible to avoid misuse?
We aren't going to stop this march forward; no matter how unpopular it is, it will happen. So, which AI company would you prefer be involved with the DOD?
"Avoid misuse"? This is the United States Military we're talking about here. They're directly involved in the ongoing genocide in Gaza at this very moment. There is no way to be ethically involved. Their entire existence is "misuse".