> They went from "this represents roughly our median guess" in the website to "maybe think of it as an 80th percentile version of the fast scenario that we don't feel safe ruling out" in followup discussions.
His post also just reads like they think they're Hari Seldon (oh Daniel's modal prediction, whew, I was worried we were reading fanfic) while being horoscope-vague enough that almost any possible development will fit into the "predictions" in the post for the next decade. I really hope I don't have to keep reading references to this for the next decade.
> while being horoscope-vague enough that almost any possible development will fit into the "predictions" in the post for the next decade.
This is a recurring theme in rationalist blogs like Scott Alexander’s: they mix a lot of low-risk claims in with heavily hedged high-risk claims. The low-risk claims (AI will continue to advance) inevitably come true, and therefore the blog post looks mostly accurate in hindsight.
When reading the blog post in the current context, the hedging goes mostly unnoticed because everyone clicked on the article for the main claim, not the hedging.
When reviewing blog posts from the past that didn’t age well, that hedging suddenly becomes the main thing their followers want you to see.
So in future discussions there are two outcomes: he's always either right or "not entirely wrong". Once you see it, it's hard to unsee. Combine that with the almost parasocial relationship that some people develop with prominent figures in the rationalist sphere, and you get a lot of echo chambers that, ironically, think they're the only rational ones who see things as they really are.
> Hari Seldon is a fictional character in the Foundation series of novels by Isaac Asimov. In his capacity as mathematics professor at Streeling University on the planet Trantor, Seldon develops psychohistory, an algorithmic science that allows him to predict the future in probabilistic terms
There are a lot of people who believe most of us will die within the next 10 years, and a rational discussion of these subjects rests largely on the fact that for the last three generations we have faced numerous existential threats that, instead of being solved, have all had the can kicked down the road.
Eventually you get an inevitable convergence in time where you simply do not have the resources to deal with them all, and with today's risk factors that convergence may cause societal failure.
Superintelligent AI alone? Yeah, that probably is not a threat, because it's so highly (astronomically) unlikely. But socio-economic collapse into starvation? Now that's a very real possibility when you create something that destroys an individual's ability to form capital, or breaks other underlying aspects that have underpinned all of societal organization for hundreds of years.
Now these things won't happen overnight, but that's not the danger either. The danger is the hysteresis: in other words, by the time you find out and can objectively show it's happening in order to react, it's impossible to change the outcome. Your goose is just cooked as a species, and the cycle of doom just circles until no one's left.
Few realize that food today is wholly dependent on Haber-Bosch chemistry. You get roughly 4x less yield without it, and, following Catton, in a post-extraction phase sustainable population numbers may be a fraction of last century's (when the population was 4bn). People break quite easily under certain circumstances, and so any leaders following MAD doctrine will likely actually use it when they realize everything is failing and see what's ahead.
These are just things that naturally happen when the long-forgotten mechanics that underpin the way things work fall to ruin. The loss of objective reality is a warning sign of such things on the horizon.
> We do not mean that as hyperbole. We are not exaggerating for effect. We think that is the most direct extrapolation from the knowledge, evidence, and institutional conduct around artificial intelligence today. In this book, we lay out our case.
Take us seriously, buy our book!
We're real researchers, so we make our definitely scientific case available to anyone who will give us $15-$30! It's the most important book ever, says some actor. Read it, so we all don't die!
For Christ's sake, how does anyone take this Harry Potter fanfiction writer seriously?
Because of what else he writes besides the fanfic. (Better question is why anyone takes JKR herself seriously).
But if you insist on only listening to people with academic accolades or industrial output, there's this other guy who got the Rumelhart Prize (2001), Turing Award (2018), Dickson Prize (2021), Princess of Asturias Award (2022), Nobel Prize in Physics (2024), VinFuture Prize (2024), Queen Elizabeth Prize for Engineering (2025), Order of Canada, Fellow of the Royal Society, and Fellow of the Royal Society of Canada.
That's one person with all of that, and he says there's a "10 to 20 per cent chance" that AI would be the cause of human extinction within the following three decades, and that "it is hard to see how you can prevent the bad actors from using [AI] for bad things": https://en.wikipedia.org/wiki/Geoffrey_Hinton
Myself, I'm closer to Hinton's view than Yudkowsky's: path dependency, i.e. I expect that before we get an existential threat from AI, we get a catastrophic economic threat that precludes the existential one.
I do say the same thing about JKR, btw, and for the same reasons: because of the content she writes. I think you focused on the fanfic part and not the part where I'm criticizing them for saying their stuff is the most important thing for keeping humanity alive while charging money for it. Meanwhile, you may notice that in academia we publish papers to make them freely available, like on arXiv. If it is that important that people need to know, you make it available.
The second person, Hinton, is not as good an authority as you'd expect, though I do understand why people take him seriously. Fwiw, his Nobel was wildly controversial. Be careful: prizes often have political components. I have degrees in both CS and physics (and am an ML researcher), and both communities thought it was really weird. I'll let you guess which community found it insulting.
I want to remind you that in 2016 Hinton famously said[0]:
| Let me start by saying a few things that seem obvious. I think if you work as a radiologist, you're like the coyote that's already over the edge of the cliff, but hasn't yet looked down so doesn't realize there's no ground beneath him. People should stop training radiologists now. It's just completely obvious that within 5 years that deep learning is going to be better than radiologists because it's going to get a lot more experience. It might take 10 years, but we've got plenty of radiologists already.
We're 10 years in now. Hinton has shown he's about as good at making predictions as Musk. What Hinton thought was laughably obvious didn't actually happen, and he's made a number of such predictions. I'll link another short explanation from him[1] because it is so similar to something Sutskever said[2]. I can tell you with high certainty that every physicist laughs at such a claim. We've long known from experience that being able to predict data does not equate to understanding that data[3].
I care very much about alignment myself[4,5,6,7]. The reason I push back on Yud and others making claims like they do is that they are actually helping create the future they say we should be afraid of. I'm not saying they're evil or directly building evil superintelligences. Rather, they're pulling attention and funds away from the problems that need to be solved. They are guessing about things we don't need to guess about. They are confidently asserting claims we know to be false (namely, that being able to make accurate predictions requires accurate understanding[8]).

Without being able to openly and honestly speak to the limitations of our machines (mainly because we're blinded by excitement), we create the exact dangers we worry about. I'm not calling for a pause on research; I'm calling for more research and for more people to pay attention to the subtle nature of everything. In a way I am saying "slow down", but only in the sense of "don't ignore the small stuff". We move so fast that we keep pushing off the small stuff, but AI risk comes through the accumulation of that debt. You need to be very careful not to let that debt get out of control. You don't create safe systems by following the boom and bust hype cycles that CS is so famous for. You don't wildly race to build a nuclear reactor and try to sell it to people while it is still a prototype.
[8] This ties back to [1,2]. I mean, you can create a data generating process that is difficult or impossible to distinguish from the actual data generating process while having a completely different causal structure, confounding variables, and all that fun stuff. Any physicist will tell you that fitting the data is the easy part (it isn't easy). Interpreting and explaining the data is the hard part. That hard part is building the causal relationships. It is the "understanding" Hinton and Sutskever claim.
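To make that concrete, here's a toy illustration (my own sketch in Python with numpy, not from any of the linked sources): two linear-Gaussian models with opposite causal arrows that produce the same observational distribution, so no amount of fitting the observed data can tell them apart; only an intervention can.

    # Toy example: same observational distribution, opposite causal structure.
    import numpy as np

    rng = np.random.default_rng(0)
    n, b = 200_000, 0.8
    noise = np.sqrt(1 - b**2)   # chosen so both models give unit variances

    # Model A: X causes Y
    x_a = rng.normal(0.0, 1.0, n)
    y_a = b * x_a + rng.normal(0.0, noise, n)

    # Model B: Y causes X (reversed causal arrow)
    y_b = rng.normal(0.0, 1.0, n)
    x_b = b * y_b + rng.normal(0.0, noise, n)

    # Observationally indistinguishable: both give cov ~ [[1, 0.8], [0.8, 1]]
    print(np.cov(x_a, y_a))
    print(np.cov(x_b, y_b))

    # But under the intervention do(X = 2) they diverge:
    # Model A predicts E[Y] = 1.6, Model B predicts E[Y] = 0.
    print((b * 2.0 + rng.normal(0.0, noise, n)).mean())   # ~1.6 (Model A)
    print(y_b.mean())                                      # ~0.0 (Model B)

Both models "predict the data" equally well; deciding which causal story is right takes something beyond curve fitting, which is the point.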