
> but if it emerges, we should be able to see it coming.”

I'm a little confused about the author's intent here. They seem to be strawmanning quite a bit. I can guarantee that not all people with AI safety concerns believe that something is being done safely merely because you can observe it happening... For one, many would argue that we are currently observing it happening without an appropriate response. For two, many are concerned that the response will not be appropriate once it is observed.

It is quite disappointing to see such a weak straw man coming out of a Stanford article. I guess it speaks to who may be providing their funding.
