> any company that actually cared about AI "safety" or "alignment", or had any belief that we are on the threshold of AGI, should steadfastly refuse to let it be used in any sort of military or intelligence context

Then they should promptly exit the space and go into, I don’t know, gardening.

Economics will force them to anyway. Taking a stand like this is practically useless and fundamentally selfish: you're using a labour boycott (it can't even be called a protest) as a substitute for civic engagement. And it's naïve: AI is being pursued by multiple militaries. The capability exists, so it will be used; a country that opts out is effectively saying it wants to fight these systems without even bothering to study them to build defences.




Or, hear me out... instead of selling it to the military, you could perhaps form a nonprofit or public benefit corp, focus on hiring the top experts in the field, and devote all your resources to learning as much as possible about these things: what their risks and limitations are, and how they can best benefit humanity.

Or maybe you already did that and realized there isn't a danger of AGI, and so are pivoting to a for-profit cash grab before the hype bubble bursts.


> you could perhaps form a nonprofit or public benefit corp, focus on hiring the top experts in the field, and devote all your resources to learning as much as possible about these things: what their risks and limitations are, and how they can best benefit humanity

Which technology and expertise, if you choose not to consider their military implications, will be repurposed by someone else.

If you build militarily relevant technology, it will be used militarily, even if you pretend while building it that it won't. When a technology has military potential, the nature of war and global anarchy guarantees it will be exploited. That's just game theory. The game only changes if war (politics through violence) becomes obsolete.


"someone else will do it" is not, has never been, and never will be valid moral reasoning. You are responsible for your own actions, not what someone else might do if you refuse to take actions you consider immoral.


> "someone else will do it" is not, has never been, and never will be valid moral reasoning

I’m not saying someone has to do it. I’m saying that if you work on AI, it’s delusional to think you can keep it from being applied militarily. If you’re opposed to military AI, stop building AI. (But don’t pretend you made a difference. You’re absolving yourself. Nothing more.)



