
Life is a game of reward optimization, and much of what humanity does seems to be an attempt to "game" the rules: get ahead at all costs, ignore the effect on the long-term survival of the species. The Clippy thought experiment asks the question: "Can we change our own reward?"

The actions you point out (environmental destruction, short-sighted carbon policies) very clearly optimize for an individualistic reward; it's in the best economic interest of a few to chop down forest / continue to burn coal / take your pick. And just as Clippy can't seem to stop building paperclips while the world burns, we can't seem to stop optimizing for money/sex/domination while our future darkens.

But if you shift the reward function -- and it's not that huge a shift -- from "win at life" to "long-term survival of the species as a whole", these actions are very clearly counterproductive.

So how do we change the reward score calculation?



No, the Clippy thought experiment asks the question: "might we accidentally build an AI that kills us all?"

It's not a very complicated thought experiment, you know. Not hard to interpret.



