
Rationalism does not claim that any entity should maximize paperclips, only that such an ethical norm could exist. And if something vastly more powerful than humans has that ethical norm, things will end very badly for us.


I do not think so. If a true AGI were to select its own version of meaning (paperclips or marbles), would it not select something along the lines of "more knowledge of the universe in which I find myself"? It is presumably going to have superintelligence, so let's give it a better and more plausible source of meaning; something other than paperclips, marbles, or von Neumann machines.


"Knowledge of the universe" is just as dangerous a terminal goal as "maximize paperclips". To paraphrase Yudkowsky, you are made from atoms, which could be used to build the super-ultra-large particle collider.


No, I disagree. More knowledge means more diverse dynamic structures and interactions, i.e. more "good" entropy. That goal covaries with greater diversity, not with more paperclips.



