
John Ioannidis is a weird case. His work on the replication crisis across many domains was seminal and important. His contrarian, even conspiratorial take on COVID-19 not so much.

He built a famous career, rising to professor and institute director at Stanford University, on meta-research into the quality of other people's research and critiques of the methodology of other people's studies. Then during Covid he tried to do a bit of original empirical research of his own, and his own methods and statistical data analysis were even worse than what he had critiqued in other people's work.

Ugh, wow, somehow I missed all this. I guess he joins the ranks of the scientists who made important contributions and then leveraged that recognition into a platform for unhinged diatribes.

Please don't lazily conclude that he's gone crazy because it doesn't align with your prior beliefs. His work on Covid was just as rigorous as anything else he's done, but it's been unfairly villainized by the political left in the USA. If you disagree with his conclusions on a topic, you'd do well to have better reasoning than "the experts said the opposite".

Ioannidis' work during Covid raised him in my esteem. It's rare to see someone in academia who is willing to set their own reputation on fire in search of truth.


What’s happening here?

“Most Published Research Findings Are False” → “Most Published COVID-19 Research Findings Are False” → “Uh oh, I did a wrongthink, let’s backtrack a bit”.

Is that it?


Yes, sort of. Ioannidis published a serosurvey during COVID that computed a lower fatality rate than the prior estimates. Serosurveys are a better way to compute this value because they capture a lot of cases that were so mild people didn't know they were infected, or thought it wasn't COVID. The public health establishment wanted to use an IFR as high as possible: e.g., the ridiculous Verity et al. estimates from Jan 2020 of a 1% IFR were still in use more than a year later, despite there having been almost no data in Jan 2020, because high IFR = COVID is more important = more power for public health.
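
To see why a serosurvey moves the estimate, here's a minimal sketch of the arithmetic in Python. All numbers are made up for illustration, not taken from any actual study:

```python
# Hypothetical numbers, for illustration only -- not from any real study.
population = 2_000_000       # people in the surveyed region
deaths = 120                 # COVID deaths there to date
confirmed_cases = 6_000      # PCR-confirmed cases
seroprevalence = 0.03        # fraction of serosurvey samples with antibodies

naive_cfr = deaths / confirmed_cases          # misses mild, untested infections
infections = seroprevalence * population      # includes undetected cases
ifr = deaths / infections
print(f"case fatality rate (confirmed cases only): {naive_cfr:.2%}")  # -> 2.00%
print(f"serosurvey-implied IFR: {ifr:.2%}")                           # -> 0.20%
```

The same death count divided by a much larger denominator is the whole effect: the more mild infections the serosurvey surfaces, the lower the implied IFR.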

If IFR is low then a lot of the assumptions that justified lockdowns are invalidated (the models and assumptions were wrong anyway for other reasons, but IFR is just another). So Ioannidis was a bit of a class traitor in that regard and got hammered a lot.

The claim he's a conspiracy theorist isn't supported; it's just the usual ad hominem nonsense (not that there's anything wrong with pointing out genuine conspiracies against the public! That's usually called journalism!). Wikipedia gives four citations for this claim, and none of them show him proposing a conspiracy, just arguing that, when used properly, the data showed COVID was less serious than others were claiming. One of the citations is actually of an article written by Ioannidis himself. So Wikipedia is corrupt as per usual. Grokipedia's article is significantly less biased and more accurate.


He published a serosurvey that claimed to have found a signal in a positivity rate that was within the 95% CI of the false-positive rate of the test (and thus indistinguishable from zero to within the usual p < 5%). He wasn't necessarily wrong in all his conclusions, but neither were the other researchers that he rightly criticized for their own statistical gymnastics earlier.

https://statmodeling.stat.columbia.edu/2020/04/19/fatal-flaw...
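
To illustrate the statistical point with hypothetical counts (roughly in the ballpark of the dispute, not the study's actual data): if the test's plausible false-positive rate is about as large as the raw positivity rate, a one-sided binomial test can't reject the hypothesis that every positive is a false positive.

```python
from scipy.stats import binomtest  # requires scipy >= 1.7

# Hypothetical counts, roughly in the spirit of the dispute -- not the
# study's actual data.
survey_positives, survey_n = 50, 3330   # raw positivity ~1.5%
fp_rate_upper = 0.017                   # plausible upper bound on the FP rate

# H0: every observed positive is a false positive (true prevalence = 0).
res = binomtest(survey_positives, survey_n, fp_rate_upper, alternative="greater")
print(f"raw rate {survey_positives / survey_n:.2%}, p = {res.pvalue:.2f}")
# A large p-value here means the signal is indistinguishable from zero.
```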

That said, I'd put both his serosurvey and the conduct he criticized in "Most Published Research Findings Are False" in a different category from the management science paper discussed here. Those seem mostly explainable by good-faith wishful thinking and motivated reasoning to me, while that paper seems hard to explain except as a knowing fraud.


> He wasn't necessarily wrong in all his conclusions, but neither were the other researchers that he rightly criticized for their own statistical gymnastics earlier.

In hindsight, I can't see any plausible argument for an IFR actually anywhere near 1%. So how were the other researchers "not necessarily wrong"? Perhaps their results were justified by the evidence available at the time, but that still doesn't validate the conclusion.


I mean that in the context of "Most Published Research Findings Are False", he criticized work (unrelated to COVID, since that didn't exist yet) that used incorrect statistical methods even if its final conclusions happened to be correct. He was right to do so, just as Gelman was right to criticize his serosurvey--it's nice when you get the right answer by luck, but that doesn't help you or anyone else get the right answer next time.

It's also hard to determine whether that serosurvey (or any other study) got the right answer. The IFR is typically observed to decrease over the course of a pandemic. For example, the IFR for COVID is much lower now than in 2020 even among unvaccinated patients, since they almost certainly acquired natural immunity in prior infections. So high-quality later surveys showing lower IFR don't say much about the IFR back in 2020.


Yeah, I remember reading that article at the time. Agree they're in different categories. I think Gelman's summary wasn't really supportable. It's far too harsh: he's demanding an apology because the data set used for measuring test accuracy wasn't large enough to rule out the possibility that there were no COVID cases in the entire sample, and because he doesn't personally think some explanations were clear enough. But this argument relies heavily on a worst-case assumption about the FP rate of the test, one which is ruled out by prior evidence (we know there were indeed people infected with SARS-CoV-2 in that region at that time).

There's the other angle of selective outrage. The case for lockdowns was being promoted based on, amongst other things, the idea that PCR tests have a false positive rate of exactly zero, always, under all conditions. This belief is nonsense, although I've encountered wet-lab researchers who believe it; apparently this is how they are trained. In one case I argued with the researcher for a bit and discovered he didn't know what Ct threshold COVID labs were using; after I told him, he went white and admitted that it was far too high, and that he hadn't known they were doing that.

Gelman's demands for an apology seem very different in this light. Ioannidis et al. not only took test FP rates into account in their calculations but directly measured them to cross-check the manufacturer's claims. Nearly every other COVID paper I read simply assumed FPs don't exist at all, or used bizarre circular reasoning like "we know this test has an FP rate of zero because it detects every case perfectly when we define a case as a positive test result". I wrote about it at the time because this problem was so prevalent:

https://medium.com/mike-hearn/pseudo-epidemics-part-ii-61cb0...

I think Gelman realized after the fact that he was being over the top in his assessment, because the article has since been amended with numerous "P.S." paragraphs which walk back some of his own rhetoric. He's not a bad writer, but in this case I think the overwhelming peer pressure inside academia to conform to the public health narratives got to even him. If the cost of pointing out problems in your field is that every paper you write has to be considered perfect by every possible critic from that point on, it's just another way to stop people flagging problems.


Ioannidis corrected for false positives with a point estimate rather than the confidence interval. That's better than not correcting, but not defensible when that's the biggest source of statistical uncertainty in the whole calculation. Obviously true zero can be excluded by other information (people had already tested positive by PCR), but if we want p < 5% in any meaningful sense then his serosurvey provided no new information. I think it was still an interesting and publishable result, but the correct interpretation is something like Figure 1 from Gelman's

https://sites.stat.columbia.edu/gelman/research/unpublished/...
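
A sketch of the difference being argued about, with hypothetical counts (the specificity-validation sample below is invented, not the study's): correcting with a point estimate of the FP rate gives a tidy positive prevalence, while propagating the validation sample's uncertainty yields an interval that can reach zero.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical counts -- not the study's actual data.
pos, n = 50, 3330    # raw survey positives (~1.5%)
fp, m = 2, 150       # false positives seen in a small specificity validation

# Point-estimate correction (what the comment describes):
point = pos / n - fp / m
print(f"point-corrected prevalence: {point:.2%}")          # ~0.17%

# Propagating the FP-rate uncertainty with a parametric bootstrap:
raw_draws = rng.binomial(n, pos / n, 100_000) / n
fp_draws = rng.binomial(m, fp / m, 100_000) / m
lo, hi = np.percentile(raw_draws - fp_draws, [2.5, 97.5])
print(f"95% interval: [{lo:.2%}, {hi:.2%}]")
# With a validation sample this small, the interval includes zero.
```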

I don't think Gelman walked anything back in his P.S. paragraphs. The only part I see that could be mistaken for that is his statement that "'not statistically significant' is not the same thing as 'no effect'", but that's trivially obvious to anyone with training in statistics. I read that as a clarification for people without that background.

We'd already discussed PCR specificity ad nauseam, at

https://news.ycombinator.com/item?id=36714034

These test accuracies mattered a lot while trying to forecast the pandemic, but in retrospect one can simply look at the excess mortality, no tests required. So it's odd to still be arguing about that after all the overrun hospitals, morgues, etc.


> So Wikipedia is corrupt as per usual. Grokipedia's article is significantly less biased and more accurate.

I hope this was sarcasm.


I would hope the same. But knowing Wikipedia I'm afraid it isn't.

Does the IFR matter? The public thinks lives are infinitely valuable, at least the lives it pays attention to. 0.1% or 1%, it doesn't really matter, right? It gets multiplied by infinity in an ROI calculation, or whatever so-called "objective" criteria people try to concoct for policymaking. I like Ioannidis's work, and his results about serosurveys (or whatever) were good, but it was being co-opted to make a mostly political policy (some Republicans: compulsory public interaction during a pandemic and, uncharitably, compulsory transmission of a disease) look "objective."

I don’t think the general idea of co-opting is hard to understand; it’s quite easy to understand. But there is a certain personality type, common among people who earn a living by telling Claude what to do, out there with a defect that compels them to “prove” people on the Internet “wrong,” and these people are constantly, blithely mobilized to further the political cause of someone who truly doesn’t give a fuck about them. Ioannidis is such a personality type, and as you can see, a victim.


> The public thinks lives are infinitely valuable.

In rhetoric, yes. (At least, except when people are given the opportunity to appear virtuous by claiming that they would sacrifice themselves for others.)

In actions and revealed preferences, not so much.

It would be rather difficult to be a functional human being if one took that principle completely seriously, to its logical conclusion.

I can't recall ever hearing any calls for compulsory public interaction, only calls to stop forbidding various forms of public interaction.


The SHOW UP Act was congressional Republicans forcing an end to telework for federal workers, with no rational basis. Teachers in Texas and Florida, where Republicans run things, were faced with showing up in person (no remote learning) or quitting.

Yeah, and lucky you! You gain all this insight b/c you logged into Hacker News on the very day someone posted the truth! What a coincidence!

"US firm confirms it will comply with US law if asked."

Unless it's data privacy or monopoly related. Then they won't.

And tax is not theft.


BMW sells baseball caps. It's a luxury fashion brand.


What proportion of its revenue is from fashion?


VW, notoriously, sells sausages!


> That's millions and millions of dollars in savings year over year compared the cost of paying humans, and it gets rid of the perverse incentives that lead to things like sub-standard materials being used, so that you have to replace materials every year in order to keep the union teams employed doing overpriced roadwork.

How does it get rid of perverse incentives? The unionised human workers use sub-standard material so they can do (and charge for) the same repair next year, but the owners of the robots do not have the very same incentive?

Is it because humans are mendacious, fallible, and corrupt, while Elon is honest, reliable, and not motivated by money?


Hong Kong (while obviously quite different) is similar in the sense that many urban areas are fairly compact and walkable, and regions of very high density housing alternate with parks, forests, playgrounds, zoological gardens, water front, etc., so that groceries, restaurants, public transport, but also recreational areas are never more than a few minutes away.

Quite a contrast to suburban sprawl.


In Mandarin you're also well advised to learn the measure word for any noun along with it.


Well, you can. But if you're just learning the language, you're going to struggle to find words that don't use a predictable one. And if you do stumble across one of those, and you use 个 instead, people may not even notice.

No one is out there watching to make fun of you if you count horses in 只 instead of 匹.


I find that eliminating doubt/confusion when speaking really helps.

I used to have this mindset about German: oh, who cares about grammar. It really limited me at the upper-intermediate level.


That is not something that will happen with measure words. Ignoring some grammar will limit you. This is not such a case. Ignoring semantics-free measure words will make you sound like a native speaker.

Chinese measure words are not something that happens on a noun-by-noun basis. Memorizing them as part of a noun is not a well-chosen approach. Most often it will saddle you with the same burden as separately memorizing "pass" and "passed" as independent verbs, while providing comparably meager benefits.

You might have noticed that my comment above suggested that

(a) the default measure word is 个;

(b) the default measure word for a horse is not 个.

Do you know why?


ohh that is a good idea!!


Not an expert, but some "to IPA" websites I checked transcribe "the huge human" as "ðə hjuːʤ ˈhjuːmən", but "ich" (voiceless palatal fricative) as "iç" (and "ach" (voiceless velar fricative) as "ax").

ç != hj

ETA: Wikipedia notes:

> The sound at the beginning of huge in most British accents is a voiceless palatal fricative [ç], but this is analysed phonemically as the consonant cluster /hj/ so that huge is transcribed /hjuːdʒ/. As with /hw/, this does not mean that speakers pronounce [h] followed by [j]; the phonemic transcription /hj/ is simply a convenient way of representing the single sound [ç].

So maybe ç == hj.

https://en.wikipedia.org/wiki/English_phonology#cite_note-18


In his book The Chinese Language: Fact and Fantasy, John DeFrancis calls the English orthography the worst among the alphabetic ones, and Japanese the worst among the logographic ones.


Re English: maybe among major languages. Faroese orthography is bad in phonetic terms, but Faroese is not a well-known language. I'm sure other smaller languages have even worse systems than English.

Among the major languages, French is also pretty awful. Its orthography is much less practical than Spanish or Italian.

Tibetan orthography is notoriously bad, but is neither alphabetic nor logographic. This is a result of Tibetan changing a great deal since it was first transcribed.


I find those Cantonese words ending in -p, -t, -k harsher than mellifluous Mandarin.


I'm not a great fan of the sibilant sounds in Mandarin. Which to be fair is pretty rich coming from an English speaker.

