> Which I guess raises a larger point about how the rationalist and EA and other similar communities have a tendency to try to reduce fiendishly complex multivariable human issues to equations and statistics without realizing that some problems are not math problems and will defy any attempt to be quantified, statisticized, or calculated.
Some of my old coworkers were really into EA, LessWrong, and the rationalist community.
At first it was fun to read some of the articles they shared, but over time I observed the same pattern you described: Much of the rationalist writing felt like it started with a conclusion and then worked backwards to find some numbers or logic that supported the conclusion. They had already made up their minds, after which the rationalist blogging was all about publicly rationalizing it.
The other trend I noticed was how much they liked to discard other people's research when it didn't agree with something they wanted to write. They were very good at finding some obscure source that had a different number or conclusion that supported their claims, which was then held up as an unquestioned source with no further scrutiny.
Someone once described it to me as "second option bias": Rationalist writings have a recurring theme of taking a commonly held belief and then proposing the slightly contrarian take that the closest alternative explanation is actually the correct one.
Once you start seeing it, the pattern shows up everywhere in rationalist writings. In this article for example:
1. Argument that kidney donation is actually much safer than people think because the author picks 1 or 2 specific failure modes and cites those as if they captured all possible downsides.
2. Argument that CT scans are actually much more deadly than people think because there are many possible downsides that we can't account for. Author admits to surveying a contentious field and selecting the most conservative risk estimate he found.
3. Argument that the Center for EA buying their own expensive castle to host their own meetings is good, actually. Anyone who questions the Center for EA spending millions on a remote castle so they could meet there must be a wrong, misinformed outsider, with little more than "just trust them that the math is right" offered as the argument.
This author is a very good writer so he's masterful at breezing past the details, but it's hard to miss the pattern once you start seeing it. The pattern is more obvious in a lot of the less popular rationalist writings where it's clear the authors discarded any sources that were inconvenient to their conclusion but elevated any sources that told them what they wanted to hear.
Another common pattern in rationalist writing is to make strong claims based on weak evidence, then to hedge in the footnotes as a way to preemptively defuse counterarguments. Sure enough, this article has some footnotes that acknowledge that the radiation risk numbers he used in the main article are actually highly disputed and he chose the most conservative ones. This point is conveniently separated from the main article to avoid detracting from the point he wanted to make. The main article confidently makes one claim, then anything that might weaken that claim is hidden in a footnote.
Predictably, many of the comments on HN questioning the radiation numbers are met with "he addressed that in the footnote" comments that try to shut down the debate, so the strategy clearly works. Something about hedging in footnotes inoculates certain readers against questioning the main article. It's another pattern that becomes obvious once you start seeing it.
Thank you for taking the time to write this up. I couldn't quite put my finger on what I found off-putting about EA, but in retrospect the second option bias and the hedging with footnotes are what have made me uneasy about it (and rationalist writing in general) since Yudkowsky.
At the same time, don't you risk punishing him for dissecting and revealing his thought process in public, however flawed it may be? I don't think you want to discourage that, do you?
The whole article is an exercise in motivated reasoning, and he comes right out and says as much.
Revealing the thought process is part of the persuasive technique.
It’s another rationalist writing theme: By taking the reader through a meandering path to the conclusion, sometimes with twists and turns and backtracks, you give the impression that you’ve covered every possible angle already. The conclusion feels unarguable because the author has walked you from beginning to end with clear logical steps.
The hidden problem is in what has been omitted. We’re supposed to assume the author included all relevant information and presented it faithfully. What really happens is that writers tend to downplay evidence that disagrees with their opinions. If they can’t avoid mentioning it, they include it while implying that they considered it and found it unreasonable.
That’s where the footnotes come in: Acknowledging the inconvenient evidence there lets the author signal awareness of it while keeping it out of the main argument, as a defensive measure.
Revealing the thought process gives misleading confidence in the conclusion if you’re assuming that the thought process isn’t prone to the same opinions and biases as the conclusion.
If it were possible to discourage him from revealing his thought process in public, wouldn't that have happened already? He hasn't exactly been shy about controversial takes on various topics.