Why would Roko's basilisk play a big part in your reasoning?
In my experience, it's basically never been a part of serious discussions in EA/LW/AI Safety. Mostly, it comes up when people are joking around or when speaking to critics who raise it themselves.
Even in the original post, this argument was actually more of a sidenote on the way to the main point (admittedly, his main point involved an equally wacky thought experiment!).
I didn't intend to portray it as a large part of my reasoning. It's not really any part of my reasoning at all, except to illustrate the sort of absurd argumentation that led to the regulations Ng is criticizing[^1]. Proponents of these lines of reasoning basically _begin_ with an almighty AI and derive harms from it, then step back and debate/design methods for preventing the almighty AI. From a strict utilitarian framework this works, because infinite harm times non-zero probability is still infinite. From a practical standpoint it's a waste of time and, as Ng argues, likely to stifle innovations with a far greater chance of benefiting society than of causing an AI doomsday.
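To spell out that expected-value step (my own sketch of the Pascal's-mugging-style arithmetic, not anything from Ng or the original post): assign some hypothetical harm $H$ and any probability $p > 0$ to the doomsday scenario, and the expected harm is

$$
\mathbb{E}[\text{harm}] = p \cdot H \to \infty \quad \text{as } H \to \infty, \text{ for any fixed } p > 0,
$$

so however small you make $p$, an unbounded downside swamps every finite expected benefit, which is exactly why the objection has to be practical rather than arithmetical.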
The absurdity of this line of reasoning also supports the cynical interpretation that this is all just moat-building, with the true believers propped up as useful idiots. I'm no Gary Marcus, but prepping for an AGI doomsday seems a bit premature.
>In my experience, it's basically never been a part of serious discussions in EA/LW/AI Safety. Mostly, it comes up when people are joking around or when speaking to critics who raise it themselves.
>Even in the original post, this argument was actually more of a sidenote on the way to the main point (admittedly, his main point involved an equally wacky thought experiment!).
This is fair; it was a cheap shot. I will note that EY seems to take the possibility seriously, though I admittedly have no idea how seriously people take EY these days. But for some reason 80,000 Hours lists AI as the #1 threat to humanity, so it reads to me more like flat-earthers vs. geocentrists.
[^1]: As in: while I understand that Roko was sincerely shitposting about something else and merely stumbled onto the repugnant conclusion that an AGI could be motivated to accelerate its own development through retroactive punishment, the absurd part is concluding that such an AGI is a credible threat in the first place. Everything else just adds to that absurdity.