Have you read Eliezer Yudkowsky and the LessWrong forum on AI existential risk? You share their sense of the sheer magnitude of future AI and their insistence on taking it seriously as a critical risk to humanity. (Their approach to addressing it is to figure out whether AI can be built aligned with human values, so that it cares about helping us rather than letting us get killed.)

(The Fermi paradox is also the kind of thing discussed on LessWrong.)