Is there a way that you'd recommend somebody outside the field assess your "90%" claim? Elsewhere in the thread you're dismissive of that one survey - which I agree is weak evidence in itself - and you also deny that the statements of leaders at OpenAI, DeepMind and Anthropic are representative of researchers there, which again may be the case. But how is someone who doesn't know you or your network supposed to assess this?
Relatedly, it'd be helpful if you could point to folks with good ML research credentials making a detailed case against x-risk. The most prominent example I'm aware of is Yann LeCun, and what I've seen of his arguments takes place more in evopsych (the stuff about an alleged innate human drive for dominance which AIs will lack) than in the field in which he's actually qualified.
TBH, I don't find LeCun particularly convincing. I think he's correct but not persuasive, if that makes sense.
Debunking AI x-risk is a weird thing to spend time on. There's really no upside, and there aren't a bunch of rich people paying for Institutes and Foundations on Non-Breathtaking-Very-Boring-Safety-Research. Also, most of the arguments in favor of x-risk that lawmakers and laypeople find most convincing are unfalsifiable, so it's a bit like arguing against religion in that respect.
I don't think there are any public intellectuals in this space doing it well. I'm not sure what to make of that. For myself, I make the argument for focusing on concrete safety problems from within the agencies and companies that are allocating resources. I'm not a gifted TED talker.
There's definitely an audience for debunking x-risk, but probably not one that attracts institutional funding. Being a tech critic in the mode of Timnit Gebru or Emily Bender will get you media attention, but I doubt it's lucrative. (And empirically the media ecosystem doesn't appear to incentivize being fair-minded enough to persuade a fence-sitter, either.)
(Not the OP) In this article LeCun argues in very concrete technical terms about the impossibility of achieving AGI with modern techniques (yes, all of them):
Meta's AI guru LeCun: Most of today's AI approaches will never lead to true intelligence
Edit: on second thought, he gets maybe a bit too technical at times, but I think it should be possible to follow most of the article without specialised knowledge.
I wouldn't describe most of what's in that interview as "very concrete technical" terms - at least when it comes to other people's research programs. More importantly, while it's perfectly reasonable for LeCun to believe in his own research program and not others, "this one lab's plan is the one true general-AI research program and most researchers are pursuing dead ends" doesn't seem like a very sturdy foundation on which to place "nothing to worry about here" - especially since LeCun doesn't give an argument here why his program would produce something safe.
You can ignore that; of course he'll push his own research. But he never says that what he does will lead to AGI. He's proposing a way forward to overcome some specific limitations he discusses.
Otherwise, he makes some perhaps subtle points about learning hidden-variable models that are relevant to the current debate over whether a system must learn a world-model in order to model text well.
> But how is someone who doesn't know you or your network supposed to assess this?
IDK. And FWIW I'm not even sure that the leaders of those organizations all agree on the type and severity of risks, or the actions that should be taken.
You could take the survey approach. I think a good survey would need at least cross-tabs for experience level, experience type, and whether the person directly works on safety, with sub-samples for both industry and academia, and perhaps again for specific industries.
Also, the survey needs to be more specific. What does 5% mean? Why 2035 instead of 2055? Those questions invite wild-ass guessing, with the amount of consideration ranging from "sure, seems reasonable" to "I spend weeks thinking about the roadmap from here to there". And self-reported confidence intervals aren't enough, because those might also be wild-ass guesses.
If I answered these questions, I would give massive intervals that basically mean "IDK and if I'm honest I don't know how others think they have informed opinions on half these questions". I suspect a lot of the respondents felt that way, but because of the design, we have no way of knowing.
Instead of asking for a timeframe or percent, which is fraught, ask about opinions on specific actionable policies. Or at least offer an opportunity to say "I am just guessing, haven't thought much about this, and [do / do not] believe drastic action is a good idea".
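To make the cross-tab idea above concrete, here's a minimal sketch of the kind of breakdown I mean, using pandas on a made-up survey export. The column names and the "p_doom" field are invented for illustration, not taken from any real survey.

    # Minimal sketch, assuming a hypothetical survey export with these (invented) columns.
    import pandas as pd

    responses = pd.DataFrame({
        "experience_years": [2, 8, 15, 4, 12, 7],
        "works_on_safety":  [False, True, False, True, False, True],
        "sector":           ["academia", "industry", "industry",
                             "academia", "industry", "academia"],
        "p_doom":           [0.01, 0.10, 0.02, 0.20, 0.05, 0.08],  # self-reported probability (invented)
    })

    # Bucket raw experience so cells aren't singletons.
    responses["experience_band"] = pd.cut(
        responses["experience_years"],
        bins=[0, 5, 10, 100],
        labels=["0-5y", "5-10y", "10y+"],
    )

    # Median estimate per (sector, experience band) cell, split by safety involvement.
    table = pd.pivot_table(
        responses,
        values="p_doom",
        index=["sector", "experience_band"],
        columns="works_on_safety",
        aggfunc="median",
        observed=True,
    )
    print(table)

    # Cell counts, so you can see which sub-samples are too thin to trust.
    counts = pd.crosstab(
        [responses["sector"], responses["experience_band"]],
        responses["works_on_safety"],
    )
    print(counts)

Even a toy table like this makes it obvious when a cell is nearly empty, which is exactly the information a published topline number hides.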
Someone has been working on survey forecasts since 2016: https://aiimpacts.org/. As fraught or self-selected as the group might be, at least someone is laboring on it.
Unfortunately, fortunately, expectedly, or otherwise, the only people writing about this in a concerted way are the people taking it seriously. And maybe Gary Marcus, whose negative predictions have repeatedly turned into milestones that were then surpassed.
I think the 5% thing is at least meaningfully different from zero or "vanishingly small", so there's something to the fact that people are putting the outcome on the table, in a way that eg I don't think any significant number of physicists ever did about "LHC will destroy the world" type fears. I agree it's not meaningfully different from 10% or 2% and you don't want to be multiplying it by something and leaning on the resulting magnitude for any important decisions.
Anyway I expect that given all the public attention recently more surveys will come, with different methodologies. Looking forward to the results! (Especially if they're reassuring.)