
For those who aren't aware, this blog is infamous for sloppy work. https://www.lesswrong.com/posts/7iAABhWpcGeP5e6SB/it-s-proba...


I follow Natália on twitter, and she's repeatedly tried to get them to address her criticisms, to no avail as far as I've seen. It's not a good look for a supposedly scientific blog, especially given that her criticisms are detailed and data-based.

I don't think we should dismiss things based purely on the source, but caution seems warranted (as always, I suppose).


This is a submission to a [contest](https://slimemoldtimemold.com/2023/01/01/mysterious-mysterie...) SMTM are running, so the author is very likely not one of SMTM's usual authors.


I still think it's important to note that the blog's work is shoddy. The fact that this blog post is #1 on HN means that multiple people will read SMTM's other blog posts, which contain several falsehoods that they've refused to fix.

See, for example, this page https://manifold.markets/Natalia/how-many-of-these-falsemisl... for an incomplete list. I have been trying to get them to remove falsehoods from their blog for over a year, but it's been impossible.


>The fact that this blog post is #1 on HN means that multiple people will read SMTM's other blog posts, which contain several falsehoods that they've refused to fix.

That was prescient: https://news.ycombinator.com/item?id=36854850


Thanks, didn't know. This article seems to have been written by a reader, though, so it's not by the same authors as the sketchy lithium work.


[flagged]


LessWrong is obviously not a cult; the allegations are nonsense. Well-established researchers post there (e.g. Paul Christiano), and some even work or have worked for OpenAI or DeepMind (e.g. Richard Ngo).

Rather, the cited source, RationalWiki, is a highly politically biased site that routinely attacks anything associated with the rationalism community, or anything that engages with topics its far-left authors deem unacceptable (such as intelligence research). They have in the past attacked highly respected bloggers such as Scott Aaronson and Scott Alexander.


[flagged]


Well, Aaronson is also a leading quantum computing researcher, and Alexander is read by respected scholars, such as the psychologist and linguist Steven Pinker, or the economist Bryan Caplan.


So? That means nothing as to the quality or even topic of their blog content.

Unless blogs started receiving peer review while I wasn't paying attention, there's likely a reason any given content lives there and not in an academic paper.


Well, yeah, blogs are not academic journals. And if a blog post disagrees with academic consensus, the academic consensus will be right the vast majority of the time. But this applies to all blogs on the internet, not only those on LessWrong.


The issue is that LW, more than most, is seemingly unaware of this fact.



Wrong submission about attention.


"Quantum Computing" is a cult with commercial interests / investor scams.

I'm even more cynical about the economists.


Scott Aaronson has often called out inflated claims by quantum computer startups.


He's been more of a quantum fanboy of late. He does still call out obvious fraud, but he's largely uncritical of the bigger players' offerings. Likewise, he's on OpenAI's payroll; no hard-hitting critique to be found there. His Busy Beaver content is top-notch, though.

But when he shares his opinions on cancel culture, gender, etc.? He's just another blogger.


It's not just bloggers. There are computer scientists with publications in top CS conferences who post on LW. For example, Alexander Turner, Dan Hendrycks, Paul Christiano, and Jacob Steinhardt all post there.


Then the question you should be asking is why they choose to post on LW when the value of publishing a paper is substantially higher.

Perhaps because what they post doesn’t hold up to scrutiny from their peers, so they prefer to play in the mud with the folks who won’t notice the issues with their ideas.


People publish blog posts for several reasons. Many see it as a fun, social activity.


This excerpt from the RationalWiki article on Scott Alexander is the only thing I need to quote to make others understand the stakes here.

>As usual, you can make anything worse by adding Reddit. /r/slatestarcodex is an unofficial fan forum for the blog. Scott comments occasionally and is a moderator. The culture wars (a regular weekly thread, until it was recently branched off to the Scott-endorsed /r/themotte) and pseudoscientific racialism of "human biodiversity" are regular and upvoted topics (literally advocating the Fourteen Words will get you 40+ upvotes[45] and admiring replies). Of course, much more offensive than the racism is objecting to the racism, which gets you a day's ban.[46] According to one moderator, "A belief in HBD doesn’t automatically equate to racism", somehow.

I also invite people to visit citations 45 and 46. The corresponding claims in the article might technically be true, but I think they're misleading, to say the least.


I hope you don’t think invoking Reddit raises the stakes…


I'm not trying to raise the stakes. I'm trying to point out the level of maturity at RationalWiki.


That's my point; everyone here is shitty, none of it matters, and smart people can ignore both places entirely and be better off for it.


In the thread:

> > Primarily he focuses not on developing a Strong AI (AGI), but rather focusing on safety issues that such a technology would pose.

> That's absurd at worst, science fiction at best, akin to worrying about manned flight safety in the 1500's.

It sure seems more warranted now.


"Safety issues" are to him more like, "the AI will kill everyone to fill its objectives" and less like, "deepfakes are concerning."

Though I'm sure he hopped on the "deepfakes are bad" bandwagon at some point to further his time in the spotlight.


> "Safety issues" are to him more like, "the AI will kill everyone to fill its objectives"

Yes, and he was way ahead of the curve here; similar positions have become much more mainstream in recent years and months. E.g. two of the three AI Turing Award winners (Hinton and Bengio) now say that superintelligent AI poses a serious extinction risk. OpenAI has also identified the alignment problem as a major issue. Even former AI skeptics like Douglas Hofstadter now broadly agree with this assessment. Yudkowsky's opinion was quite prescient.


None of those people you mentioned share Yudkowsky’s conclusion that air strikes against data centers are warranted.


Do you have a citation for that?

I know how to find the statements by Hinton and Bengio that superintelligent AI poses a serious extinction risk, but I can't find any statements by Hinton and Bengio--or anyone who is taking the risk seriously--suggesting that air strikes would not be warranted, so any clues on where to look would be appreciated.


So in other words, people who've spent their entire lives thinking and talking about AI praise other people who also have spent their entire lives thinking and talking about AI.

This is a bubble conversation that makes no sense to people outside of that bubble, and for good reason; it doesn't matter outside of that bubble.


No, AI experts who were formerly skeptical of an outsider's (Yudkowsky's) opinion have now changed their minds about it.


Wait, so you deny the fact that Hinton and Bengio have worked on AI their entire careers?


No, I don't, why do you think that?


Because you said “no”, and I only said one thing. Do you take that back?


That's so embarrassingly obtuse that I refuse to believe you're not aware of it and just being contrarian.


Not obtuse at all, just catching rhetorical trickery.


I was disagreeing with it being about experts "praising" each other.


No, you weren't; you were choosing to focus on a different category these people belong to, in what was ultimately a non sequitur comment.



