>Arguments that presuppose a god-like super intelligence are not useful. Sure, if we create a system that is all powerful I’d agree that it can destroy humanity. But specifically how are we going to build this?

Yeah, exactly. THIS is the type of x-risk talk that I find cringe-y and ego-centered.

There are real risks. I've devoted my career to understanding and mitigating them, both big and small. There's also risk in risk mitigation, again, both big and small. Welcome to the world of actual engineering :)

But a super-human, god-like intelligence capable of destroying us all by virtue of the fact that it has an IQ over 9000? I have a book-length criticism of the entire premise, and of whether we should even waste breath on it before we start analyzing whether there is even the remotest scrap of evidence that we are anywhere near such a thing being possible.

It's sci-fi. Which is fine. Just don't confuse compelling fiction with sound policy, science, or engineering.
