
This entire debate was a shambles, IMO. Neither side really made good points, because for almost the entire time they were arguing past each other: one side was insisting AI would be a severe risk, while the other was arguing that it wasn't an existential risk. In the end it became a pointless circular argument in which each side retreated to its own vague interpretation of the question.

What scares me most is that we seem wholly unprepared as a species to have this debate. Our technology keeps increasing in power and ease of use, but our ability to grasp even the basic distinction between "existential risk" and "severe risk" is lacking. Worse, amongst those pushing this kind of technology (the accelerationists) there is a subtle undertone that some casualties are expected and acceptable during the transformation: even if it kills most humans, the world left for the survivors will be so much better that it might be worth it. Few come right out and say this, but it is what they seem to be implying.


