A lot of this discussion reminds me of the book Blindsight.

Something doesn't have to be conscious or intelligent to harm us. Something that simulates those qualities effectively can be almost indistinguishable from a conscious being trying to harm us.



I never asserted that they couldn't do harm. I asserted that they don't think, and therefore cannot intend to do harm. They have no intentions whatsoever.


What does it matter whether or not there was intention, as long as harm was done?


If a person causes harm, we care a lot about intent. We distinguish between manslaughter and first- and second-degree murder, and we add hate-crime penalties on top when the victim was chosen for a specific set of recognized reasons. ML models aren't AGI, so it's not clear how we'd apply those distinctions, but there's precedent for intent mattering.



