
Thanks for your reply; it's cool that there are others who share this interpretation of the ongoing development. When I said "it probably won't happen", I mostly meant it in a resigned way: I think humanity won't muster any resistance and will leave things for Sam Altman and OpenAI to decide. Sad as that is.

I also find it funny that the paperclip-maximizer scenarios are at the forefront of the alignment people's thoughts, when even an aligned AGI would reduce humanity to its useless pet. I guess some might find such an existence pleasant, but it would nonetheless be the end of humanity as a species with self-determination.


