I feel like Andrew Ng has more name recognition than Google Brain itself.

Also, Business Insider isn't great; the original Australian Financial Review article has a lot more substance: https://archive.ph/yidIa

I've never been convinced by the arguments of OpenAI/Anthropic and the like on the existential risks of AI. Maybe I'm jaded by the ridiculousness of "thought experiments" like Roko's basilisk and the lines of reasoning followed by EA adherents, where the risks are comically infinite and alignment feels a lot more like hermeneutics.

I am probably just a bit less cynical than Ng is here on the motivations[^1]. But regardless of whether or not the AGI doomsday claim is justification for a moat, Ng is right that it's taking a lot of the oxygen out of the room for more concrete discussion of the legitimate harms of generative AI -- like silently proliferating social biases present in the training data, or making accountability a legal and social nightmare.

[^1]: I don't doubt, for instance, that there's in part some legitimate paranoia -- Sam Altman is a known doomsday prepper.




> Ng is right that it's taking a lot of the oxygen out of the room for more concrete discussion of the legitimate harms of generative AI -- like silently proliferating social biases present in the training data, or making accountability a legal and social nightmare.

And this is the important bit. All this rambling by people like Altman and Musk about the existential risk of AI distracts from the real AI harm discussions we should be having, and thereby directly harms people.


I'm always unsure what people like you actually believe regarding existential AI risk.

Do you think it's just impossible to make something intelligent that runs in a computer? That intelligence will automatically mean it will share our values? That it's not possible to get anything smarter than a smart human?

Or do you simply believe that's a very long way away (centuries) and there's no point in thinking about it yet?


I don’t see how we could make some artificial intelligence that, like in some Hollywood movie, can create robots with arms and kill all of humanity. There’s a physical component to it. How would it create factories to build all this?


Why would Roko's basilisk play a big part in your reasoning?

In my experience, it's basically never been a part of serious discussions in EA/LW/AI Safety. Mostly, it comes up when people are joking around or when speaking to critics who raise it themselves.

Even in the original post, the possibility of this argument was actually more of a sidenote on the way to the main point (admittedly, his main point involved an equally wacky thought experiment!).


I didn't intend to portray it as a large part of my reasoning. It's not really any part of my reasoning at all, except to illustrate the sort of absurd argumentation that led to the regulations Ng is criticizing[^1]. The proponents of these lines of reasoning basically _begin_ with an all-mighty AI and derive harms, then step back and debate/design methods for preventing the all-mighty AI. From a strict utilitarian framework this works, because infinite harm times non-zero probability is still infinite. From a practical standpoint it's a waste of time and, as Ng argues, is likely to stifle innovations with a far greater chance of benefiting society than of causing AI doomsday.

The absurdity of this line of reasoning also supports the cynical interpretation that this is all just moat building, with the true believers propped up as useful idiots. I'm no Gary Marcus, but prepping for AGI doomsday seems a bit premature.

>In my experience, it's basically never been a part of serious discussions in EA/LW/AI Safety. Mostly, it comes up when people are joking around or when speaking to critics who raise it themselves.

>Even in the original post, the possibility of this argument was actually more of a sidenote on the way to the main point (admittedly, his main point involved an equally wacky thought experiment!).

This is fair; it was a cheap shot. While I will note that EY seems to take the possibility seriously, I admittedly have no idea how seriously people take EY these days. But for some reason 80,000 Hours lists AI as the #1 threat to humanity, so it reads to me more like flat earthers vs. geocentrists.

[^1]: As in, while I understand that Roko is sincerely shitposting about something else, and merely coming across the repugnant conclusion that an AGI could be motivated to accelerate its own development by retroactive punishment, the absurd part is in concluding that AGI is a credible threat. Everything else just adds to that absurdity.



