I follow Natália on Twitter, and she's repeatedly tried to get them to address her criticisms, to no avail as far as I've seen. It's not a good look for a supposedly scientific blog, especially given that her criticisms are detailed and data-based.
I don't think we should dismiss things based purely on the source, but caution seems warranted (as always, I suppose).
I still think it's important to note that the blog's work is shoddy. The fact that this blog post is #1 on HN means that multiple people will read SMTM's other blog posts, which contain several falsehoods that they've refused to fix.
>The fact that this blog post is #1 on HN means that multiple people will read SMTM's other blog posts, which contain several falsehoods that they've refused to fix.
LessWrong is obviously not a cult; the allegations are nonsense. Well-established researchers post there (e.g. Paul Christiano), and some even work or have worked for OpenAI or DeepMind (e.g. Richard Ngo).
It's rather that the cited source "RationalWiki" is a highly politically biased source which routinely attacks anything which is in any way associated with the rationalism community, or which engages with things that are deemed unacceptable by its far-left authors (such as intelligence research). They have in the past attacked highly respected bloggers such as Scott Aaronson and Scott Alexander.
Well, Aaronson is also a leading quantum computing researcher, and Alexander is read by respected scholars, such as the psychologist and linguist Steven Pinker, or the economist Bryan Caplan.
So? That means nothing as to the quality or even topic of their blog content.
Unless blogs started receiving peer review while I wasn't paying attention, there's likely a reason any given content lives there and not in an academic paper.
Well, yeah, blogs are not academic journals. And if a blog post disagrees with academic consensus, the academic consensus will be right the vast majority of the time. But this applies to all blogs on the internet, not only those on LessWrong.
He's been more of a quantum fanboy of late. He does still talk about obvious fraud, but he's largely uncritical of the bigger players' offerings. Likewise, he's on OpenAI's payroll; no hard-hitting critique to be found there. His Busy Beaver content is top-notch, though.
But when he shares his opinions on cancel culture, gender, etc.? He's just another blogger.
It's not just bloggers. There are computer scientists with publications in top CS conferences that post on LW. For example, Alexander Turner, Dan Hendrycks, Paul Christiano, and Jacob Steinhardt all post there.
Then the question you should be asking is why they choose to post on LW when the value of publishing a paper is substantially higher.
Perhaps because what they post doesn’t hold up to scrutiny from their peers, so they prefer to play in the mud with the folks who won’t notice the issues with their ideas.
This excerpt from RationalWiki's article on Scott Alexander is the only thing I need to quote to make others understand the stakes here.
>As usual, you can make anything worse by adding Reddit. /r/slatestarcodex is an unofficial fan forum for the blog. Scott comments occasionally and is a moderator. The culture wars (a regular weekly thread, until it was recently branched off to the Scott-endorsed /r/themotte) and pseudoscientific racialism of "human biodiversity" are regular and upvoted topics (literally advocating the Fourteen Words will get you 40+ upvotes[45] and admiring replies). Of course, much more offensive than the racism is objecting to the racism, which gets you a day's ban.[46] According to one moderator, "A belief in HBD doesn’t automatically equate to racism", somehow.
I also invite people to visit citations 45 and 46. The corresponding claims in the article might technically be true, but I think they're misleading, to say the least.
> "Safety issues" are to him more like, "the AI will kill everyone to fill its objectives"
Yes, and he was way ahead of the curve here, since similar positions have become far more mainstream over the past few years and months. E.g. two of the three AI Turing Award winners (Hinton and Bengio) now say that superintelligent AI poses a serious extinction risk. OpenAI has also identified the alignment problem as a major issue. Even former AI skeptics like Douglas Hofstadter now broadly agree with this assessment. Yudkowsky's opinion was quite prescient.
I know how to find the statements by Hinton and Bengio that superintelligent AI poses a serious extinction risk, but I can't find any statements by Hinton and Bengio--or anyone who is taking the risk seriously--suggesting that air strikes would not be warranted, so any clues on where to look would be appreciated.
So in other words, people who've spent their entire lives thinking and talking about AI praise other people who also have spent their entire lives thinking and talking about AI.
This is a bubble conversation that makes no sense to people outside of that bubble, and for good reason; it doesn't matter outside of that bubble.