
This is a naive view of technology and its development. Yes, there is a certain degree to which technologies can be appropriate for a range of ends, but they are also created and distributed in historical situations in which the creators have incentives. There are plenty of cases in history in which ostensibly "neutral" technological decisions and developments had intentional political and economic effects. Check out Langdon Winner's classic paper "Do Artifacts Have Politics?"

The tools do not exist without the humans, and the humans, consciously or otherwise, design tools according to their own views and morals.

To outline just a basic example: many initial applications of generative AI were oriented toward generating images and other artistic assets. If artists, rather than technologists, had been the designers, do you think this would have been one of the earlier applications? Do you think they might have spent more time figuring out the intellectual property questions surrounding these tools?

Yes, the morals ultimately trace back to humans, and it's not correct to impute morals onto a tool (though, ironically enough, the personification and encoding of linguistic behavior in AI may be one reason LLMs could be considered a first exception to this). But reducing the discussion to "technology is neutral" swings the pendulum too far in the other direction: it tends to absolve technologists and designers of moral responsibility by pushing it entirely onto the user, which, news flash, is illegitimate. The creators of things have a moral responsibility too. For example, the morality of designing weapons for the destruction of human life is clearly contestable.



My views are not naive. Let's get specific. What do you propose? That LLMs or image generators be banned?

Are technologists creating AI swarms for political manipulation? Or is that being done by politicians or political groups?

Are you suggesting that an LLM or image generator is like a gun?


I'm not arguing for a ban, but I am trying to argue for a more nuanced view of technology and responsibility.

No, LLMs are not guns. That said, any responsible and conscientious designer of LLMs should know that they pose risks of their own: they enable misinformation that threatens democratic discourse, and they raise serious questions about economic stability, copyright, and plagiarism.

You could try to weigh these risks against the benefits, but the fact of the matter is that the responsible way to develop these technologies is to show that you are explicitly accounting for these obvious dangers as well. Very few companies actually do that (because they selfishly choose to optimize for profit instead).

My main point is just that the user isn't the only one who bears responsibility when it comes to technology. Designing a tool rather than using it does not automatically absolve you of moral responsibility.

I think the reasonable approach is regulation: it is the only thing that counters sheer profit motive and forces companies to attend to at least some of this responsibility.



