Just throwing in my 2c, as someone who used to be highly "pro-science" but lost confidence in much of academia, and in the validity of scientific research in general, after starting a PhD and seeing how the sausage is made.
The biggest problem science is facing is not an external threat from a rabble of ignorant science deniers, but the complete degradation of quality within the scientific institutions themselves.
Most research is flawed or useless, but published anyway because it's expedient for the authors to do so. Plenty of useful or interesting angles are not investigated at all because doing so would risk invalidating the expert status of the incumbents. Of the science that is communicated to the public (much of which is complete horseshit), the scientists themselves are often complicit in the misrepresentation of their own research as it means more of those sweet, sweet grants. The net result is the polished turd that is scientific research in the 21st century.
"Educated" people can say what they want about how important it is to believe in the science and have faith in our researchers and the institutions they work for.
The fact remains that if one were to assume that every single scientific discovery of the last 25 years was complete bullshit, they'd be right more often than they were wrong.
> Most research is flawed or useless, but published anyway because it's expedient for the authors to do so.
For the record I now know a professor at one of the premier institutions in the world, who is a total fraud. Their research was fraudulent in grad school. Their lab mates tried to raise concerns and nothing happened. That person graduated, with everyone on the committee knowing the issues. Then they got a premier post-doc position. People in their lab (who I caught up with at a conference) mentioned their work was terrible. Now they’re a professor at a top tier university.
Along the way, everyone knew and when people tried to bring up concerns higher ups in the institution suppressed the knowledge. Mostly because their fraudulent work was already cited tens of times.
This wasn’t directly in my field, but I saw it go down and followed it.
In my day job, I just throw out papers that don’t publish datasets and code. Most CS work is equally useless. It’s all a farce.
EDIT: For some insights, I recommend the book “Rigor Mortis: How Sloppy Science Creates Worthless Cures, Crushes Hope, and Wastes Billions” by Richard Harris.
> I just throw out papers that don’t publish datasets and code
There was an interesting research paper that claimed you could raise your IQ with a computer game called dual n-back, and a lot of papers tried to replicate it. My mind was blown when I realized none of them used the same code: none of them actually shared the code they used for their research, yet they all claimed they were testing the same thing when they obviously weren't.
To me, refusing to share the source code to a program that could be used to replicate your research seems like a big middle finger to science itself. It shows a total disregard for study replication.
Exactly, it'd be like omitting the "methodology" section of the paper. There's no way to verify or prove wrong a paper if you don't know how they reached their conclusions.
Even more than sharing code, I think the key is to share raw data and postprocessed data, and to describe methods as exceptionally clear equations and pseudocode.
Some well-intentioned papers share code which is hard to run years afterwards. It's sometimes much simpler to reimplement things than to get the code to run, provided they are described accurately.
Besides, some articles hide ugly things, nasty tricks and lies in the code, which make their results a lot less believable and valid. Being super upfront about models in terms of equations and pseudocode is important.
Of course, we should also have standards to make code reproducible. Perhaps depositing a VM with everything ready to run.
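Even short of a full VM, depositing a machine-readable manifest of the environment next to the data would help. Here's a minimal sketch, assuming a Python-based pipeline (the manifest format and function name are my own invention, not any journal standard):

    # Sketch: record interpreter, OS and installed package versions alongside
    # results, so a future replicator knows what the code actually ran against.
    import json
    import platform
    import sys
    from importlib import metadata

    def environment_manifest():
        return {
            "python": sys.version,
            "platform": platform.platform(),
            "packages": {dist.metadata["Name"]: dist.version
                         for dist in metadata.distributions()},
        }

    with open("environment.json", "w") as f:
        json.dump(environment_manifest(), f, indent=2, sort_keys=True)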
> Perhaps depositing a VM with everything ready to run.
The danger then is that the VM has so much undocumented complexity that if anything goes wrong, or goes "well" when it shouldn't, no one can explain why. Which also reintroduces a vector to hide nasty tricks.
This. The point of sharing reproducible steps rather than the experiment itself is that the work can be fully reproduced independently, not just that the results can be independently verified to show what the paper claims.
Results that haven't been independently replicated are suspect. There are just too many factors that can lead an experiment to give some results that are not transferrable or not relevant.
The worst aspect of this is the lack of will or funding to replicate, replicate, and replicate again all significant results that get published. Post-processed data can be altered, but a TB of raw data is meaningless as well if it hasn't been produced properly, has been obfuscated, or is weirdly formatted.
Data availability is a red herring for the vast majority of the science being done right now (almost everything that does not depend on a multi-million-dollar experiment). If data availability becomes an end in itself, we will just have moved the goalposts and have a data quality problem instead of a reproducibility problem.
> Perhaps depositing a VM with everything ready to run.
Yes, this is hugely important. We need clearer requirements for what constitutes properly 'published' data, code and methods. It should include all raw data (both used and discarded) as well as complete snapshots of each preliminary and intermediate processing step in a time-stamped chronology.
This is an area where the expertise and best practices of software development, documentation and source control could help inform standards and tooling which would dramatically improve the quality of scientific publishing and replication.
> Some well intentioned papers share code which is hard to run years afterwards.
Umm, not really? How many years are we talking about here? Because even COBOL is runnable. Sure, some OS-related quirks may need changing, but getting the raw source is much more likely to expose a subtle flaw in the researcher's method than the equations alone. For example, an incorrect negative check (comparing to a string vs. < 0) or an off-by-one error.
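To illustrate with a hypothetical (mine, not from any real paper): the prose and equations can be perfectly correct while the code silently drops a data point.

    # A paper's methods section says "the mean of the trials"; only the
    # source reveals the off-by-one that skips the first observation.
    def mean_reaction_time(trials):
        total = 0.0
        for i in range(1, len(trials)):   # BUG: should be range(len(trials))
            total += trials[i]
        return total / len(trials)

    print(mean_reaction_time([200.0, 210.0, 190.0, 205.0]))  # 151.25, not 201.25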
Don't think COBOL; think Python 2.1 (not 2.2) with this specific version of PIL, and some of the code implemented as a CPython module (for speed) in C that only compiles properly with one specific EGCS fork that adds support for IA-64. Parts of the code are written in slightly-buggy inline IA-64 assembly, but it's okay because after a system call, that particular operating system makes sure those registers are zeroed each loop iteration, if Python's line-buffering its `print`s so that the system call gets consistently run.
Also, the Python script crashes on loading the data files unless there are two (but not more than two) spaces in its full file path. This is not documented anywhere.
Yeah. I can easily run FORTRAN code from my PhD supervisor's PhD supervisor, written way back in the 1980s, but I cannot run some of the Python scripts written by a post-doc 10 years ago. It's a mess of abandoned libraries and things that work only with some specific versions of some libraries, and compiled modules that rely on the dodgy behaviour of an old compiler. Perl seems to be better, if only because people used to rely much less on external libraries.
But properly running code is not the solution either. I can count on the fingers of one hand the downloads of some of the codes we've published (easily compilable modern Fortran), and AFAIK nobody ever published anything using them. Having a multitude of codes available does not mean much if nobody runs them, assuming they can even be compiled. And I would guarantee that none of the scientists who download these codes would be able to understand what they do in any detail.
Indeed, that's not proper science: too many moving parts. If it cannot be easily replicated by anyone at any time, it is just an experiment which would need more refinement to get published. No experiment should be accepted if it requires a Rube Goldberg machine.
Sharing code and data can be as harmful to science as it is beneficial. You don’t want to reuse the same instruments that conducted the last experiment to validate it.
Independent replication is inherently expensive, but also critical to the field at large. Some sort of code vault that releases the code and data after a period could be a solid compromise.
Well, yes, but no. You want to avoid any systematic bias, which you risk if you reuse tooling/instruments/code, but source code is also something that can be peer-reviewed.
Reproducing the code base from scratch just isn't going to be tractable for large scientific pieces of software (e.g. CERN).
Better to have the source code and dig through it carefully as part of the review and replication than insist everyone write their own cleanroom copy just to avoid systematic bias.
Suppose someone decided to build another LHC, but to save money they used exactly the same design, construction crew, and code used to build the first one. Would you consider that a perfectly reasonable choice, or would it seem risky?
That said I am all for peer reviewing code, but that’s not where this ends up. People are going to reuse code they don’t bother to dig into because people are people. The obvious next step would be to reject any paper that’s reusing code from a different team, but that’s excessive.
That said reusing code or data is much safer than reusing code and data.
I’m going to be upfront and say I don’t understand your position. You seem to be making a number of questionable assumptions: 1) That people would blindly reuse code to the harm of science, including on a multi-billion-dollar project like the LHC. That’s not likely to ever happen.
2) That rejecting those who plagiarize others’ code in a peer-review process is somehow excessive or problematic.
I can’t begin to understand where you are coming from.
I'm not sure if I have exactly the same concerns as the person you're replying to, but I've definitely noticed problems coming from this form of "replication" through just re-running existing code. I don't think that means code shouldn't be released, but I think we should be wary of scientific code being reused verbatim too frequently.
If existing code is buggy or doesn't do what its authors think it does, this can be a big problem. Even if the idea is correct, whole swathes of downstream literature can get contaminated if authors didn't even attempt to implement an algorithm independently, but just blindly reused an existing docker image that they didn't understand. I consider that poor practice. If you claim that your work is based on Paper X, but what you really mean is that you just ran some Docker image associated with Paper X and trusted that it does what it says it does, instead of independently reimplementing the algorithm, that is not replication.
In older areas of science the norm is to try to replicate a paper from the published literature without reuse of "apparatus". For example, in chemistry you would try to replicate a paper using your own lab's glassware, different suppliers for chemicals, different technicians, etc. Analogies here are imperfect, but I would consider downloading and re-running an existing docker image to be dissimilar from that. It's more like re-running the experiment in the first lab rather than replicating the published result in a new lab. As a result it can, like re-running a chemistry experiment in the same lab, miss many possible confounds.
Of course, you do have to stop somewhere. Chemistry labs don't normally blow their own glass, and it's possible that the dominant way of making glass in your era turns out to be a confound (this kind of thing really does happen sometimes!). But imo, on the code side of things, "download and rerun a docker image" is too far towards the side of not even trying to replicate things independently.
For that reason a special multimedia sharing tool was created, now called the WWW, so the international physics community could share their code, papers and CAD designs for the LHC experiments. Quite a success for them, but the rest of academia is still resistant.
It's not reasonable, but for different reasons. Building a carbon copy of the LHC would not add anything new in the way of science. A better example would be LIGO. Adding another gravitational-wave detector of the exact same spec but halfway around the world would be fantastic, because it increases our triangulation ability, and keeping the engineering, hardware, and software the same reduces the cognitive load of the scientists running it. Yes, that means any "bug" in the first is present in the second, but that also means you have a common baseline. In fact there will inevitably be discrepancies in implementation (no two engineering projects are identical, even with the same blueprint), and you can leverage that high degree of similarity to reduce the search space (so long as subsystems are sufficiently modular, and the software is a direct copy).
The original comment was with respect to some n-back training program. There's so many other potential places of bias in an experiment like that, that you'd be foolish not to start with the exact same program. If an independent team uses a different software stack, and can't replicate, was it the different procedure, software, subjects, or noise?
The first step in scientific replication is almost always, "can the experiment be replicated, or was it a fluke?" In this stage, you want to minimize any free variables.
It's a matter of (leaky) abstractions. If I'm running a chemistry replication, I don't need the exact same round bottom flask as the original experiment; RBFs are fungible. In fact I could probably scale to a different size RBF. However, depending on the chemistry involved, I probably don't want to run the reaction in a cylindrical reactor, at least not without running in the RBF first. That has a different heating profile, which could generate different impurities.
Likewise, I probably don't need the exact same make/model of chromatograph. However, I do want to use the same procedure for running the chromatograph.
Ideally, that would be a concern of the peer review. When I finished my undergraduate degree, I had to present a paper describing a program. I had to show that program working as part of my presentation. Anyone reading my "paper" and knowing I got a passing grade already knows the code I supplied works with the data I supplied, so I don't think it's that important for them to run it themselves.
Essentially, this would be like trying to reproduce a paper and starting by checking if their mathematical apparatus is correct. It's not useless, and it can help detect fraud or just plain bad peer review of course, but I wouldn't call that an attempt to reproduce their results per se.
It could be a nice quick sanity check, in the sense that if they've completely lied or you've completely misunderstood how to use the program, you won't get the same results. So it could tell you that you shouldn't even bother trying to replicate their claims. But there's a risk that people might mistake re-running the code for reproducing the findings of the paper.
The paper is entered into the scientific record, not the code, which will inevitably become obsolete (old language, old frameworks, implementation details tied to old hardware or operating systems, maybe some source code will be lost, github will go out of business). If the code is necessary, then crucial details have been left out of the paper, so it is not reproducible (although there are some journals that let you submit code as an artifact).
The risk here is that the original code contains a bug that is inadvertently reproduced by the replicating scientists after reading it. It can easily happen in some fields that some computation looks very plausible in code, but is actually incorrect.
"To me, refusing to share the source code to a program that could be used to replicate your research seems like a big middle finger to science itself."
I used to work (current PhD student) in HPC (I've since switched specialties) and I was extremely surprised that not only did these people not share code (the DOE actually encourages open-sourcing code), but they would not even do so upon request. Several times I had to get my advisor to ask the author's advisor. Several times I found out why I couldn't replicate another person's work. It is amazing to me that anyone in CS does not share code. It is trivial to do, and we're all using git in some form or another anyways.
Sharing source code for a HEP experiment is not that easy, or even possible at times. A lot of the work is done by different groups within a shared framework, and a huge amount of raw data is collected and reconstructed. Even if it were all available, you would need a lot of people and a lot of resources to analyze it. So even making it available (which would be a lot of effort in itself) wouldn't make much sense for replication purposes.
So what? Tough shit! It's still simply the only way to replicate or audit something.
If it's hard or tedious... well, so what? So is life. Most of science is exactly that rigor.
You might as well say no one else can grow a potato because of all the work your farm had to do all year to arrive at your crop. Yes, any other farmer will have to do all the same stuff. It's an utterly unremarkable observation.
How is it possible to peer review a paper when the only people qualified to do so are the ones involved in the research? Seems like a massive issue. I don't want to cast too much shade here, but a fair number of people have called out high energy physics for problematic methodology. See Constructing Quarks by Andrew Pickering. Ultimately, it should be CERN's job to release the data, yes, even if it is terabytes in size, because that's the whole point of science.
>How is it possible to peer review a paper when the only people qualified to do so are the ones involved in the research? Seems like a massive issue.
Peer review isn't what it's cracked up to be. Basically it's two other people in the same field saying "There are no blatantly obvious issues with this paper", and even that isn't always guaranteed. Reviewers don't make any effort to actually replicate the research, crunch the data, etc.
Re running the authors code with their data will likely just repeat any methodological issues they had (either accidentally or fraudulently).
In medical studies, one level is to reevaluate the data from the same set of patients to see whether any bias or errors crept in, perhaps with a different or newer statistical methodology. The best is for the study to be repeated with a fresh set of patient data to see if the underlying conclusions were valid.
Reading this, I am reminded of a quote by Edsger W. Dijkstra from 1975 [1]:
> In the good old days physicists repeated each other's experiments, just to be sure. Today they stick to FORTRAN, so that they can share each other's programs, bugs included.
"Re running the authors code with their data will likely just repeat any methodological issues they had"
Yes, but that's useful too.
COVID-19 lockdowns largely kicked off in the west due to an epidemiological model from Imperial College London, written over a period of many years by the now notorious Professor Neil Ferguson.
When his team uploaded a pre-print of the paper and started sending it to government ministers, the code for his model wasn't available. They spent months fighting FOIA requests by claiming they were about to release it, but just had to tidy things up a bit first. When the code was finally uploaded to GitHub, the world discovered the reason for the delay: the model was a 15,000-line C trash fire of race conditions, floating point accumulation errors and memory corruptions, in which basically every variable was in global scope and had a single-letter name. It was a textbook case of how not to write software. In fact it was a textbook case of why variable names matter, because one of the memory corruptions was caused by the author apparently losing track of what a variable named 'k' was meant to contain at that point in the code.
Not surprisingly, the model didn't work. Although it had command line flags to set the PRNG seeds these flags were useless: the model generated totally different output every time you ran it, even with fixed seeds. In fact hospital bed demand prediction changed from run to run by more than the size of the entire NHS Nightingale crash hospital building programme, purely due to bugs.
And as we now know the model was entirely wrong and the results were catastrophic. Lockdowns had no impact on mortality. There are many people who looked at the data and saw this but here's just the latest meta-analysis of published studies showing that to be true [1]. They destroyed the NHS which now has a cancer treatment backlog and pool of 'missing' patients so large that it cannot possibly catch up, meaning people will die waiting for treatment from the system they supported with their taxes for their entire lives. They destroyed the tax base, leaving the government with an unpayable debt that can be eliminated only via inflation meaning they will have soon destroyed people's savings too. It's just a catastrophe of incompetence and hubris.
The incorrectness of the model wasn't due only to programming bugs. The underlying biological assumptions were roughly GCSE level or a bit lower (GCSE is the exams you take at 15/16 in the UK), and it's quite evident that high school germ theory is woefully incomplete. In particular it has nothing to say on the topic of aerosol vs droplet transmission, which appears to be a critical error in the way these models are constructed.
Nonetheless, even if the assumptions were correct such a model should never have been used. Anyone outside the team who had access to the original code would have seen this problem immediately and could have sounded the alarm, but:
1. Nobody did have access.
2. ICL lied to the press by claiming the code had been published and peer reviewed years earlier (so where was it?)
3. Then when it was revealed the model wasn't reproducible, they teamed up with another academic at Cambridge and lied again by publishing a report+press release claiming it actually was reproducible and claims otherwise were misinformation.
4. And then the journal Nature and the BBC repeated these false claims of reproducibility.
All whilst anyone who looked at the GitHub issues list could see it filling up with determinism bugs. If you want citations for all these claims look here, at a summary report I wrote for a friendly MP [2].
So. It's good that the Royal Society is telling people not to engage in censorship, but their justifications for taking that stance reveal they're still living in lala land. By far the deadliest and most dangerous scientific misinformation throughout COVID has come from the formal institutions of science themselves. You could sum all the Substacks together and they would amount to 1% of the misinformation that has been published by government-backed "scientists", zero of whom have been banned from anything or received any penalty whatsoever. For as long as the scientific institutions are in denial about how utterly dishonest their culture has become we will continue to see a weird inversion in which random outsiders point out basic errors in their work and they respond by yelling "disinformation".
The meta-study you cite has quite bizarre conclusions. It basically states that isolation behavior is highly effective at preventing Covid deaths, but that lockdowns were a bad predictor of such behavior in people. But, they still seem to attribute huge economic impact to the lockdowns, not the pandemic and natural response (voluntary behavioral changes) to it.
Overall it seems that they would have been better served by adding lockdown compliance to their models, which would likely explain much of the difference. It's absurd to claim that lockdowns don't work when a country like Vietnam (100 million people, among the first detected cases of COVID-19 community spread outside China) had <100 total COVID-19 deaths in 2020. Strict targeted lockdowns, strict isolation requirements after every identified case, for contacts up to the third degree (a contact of someone who was a contact of someone who came in contact with a patient) - these all must have played a role, and any study that fails to explain such extreme success is simply flawed.
Are you sure it states that? Maybe you mean this paragraph:
"If this interpretation is true, what Björk et al. (2021) find is that information and signaling is far more important than the strictness of the lockdown. There may be other interpretations, but the point is that studies focusing on timing cannot differentiate between these interpretations. However, if lockdowns have a notable effect, we should see this effect regardless of the timing, and we should identify this effect more correctly by excluding studies that exclusively analyze timing."
"It's absurd to claim that lockdowns don't work when a country like Vietnam (100 million people, first detected cases of COVID-19 community spread outside China) had <100 total COVID-19 deaths in 2020"
Many countries claim an improbably low level of COVID deaths. That doesn't mean their numbers can be taken at face value, although poorer countries do seem to be less badly hit, simply because they have far fewer weak, obese and elderly people to begin with.
It's certainly not absurd to claim lockdowns don't work. You're attempting to refute a meta-analysis of studies looking at all the data with the example of a single country. That's meaningless. A single data point can falsify a theory but it cannot prove a theory. To prove lockdowns work there has to be a consistent impact of the policy.
It is completely absurd. "Lockdowns" is shorthand for "people not coming into contact with other people in a way that spreads the virus". Because the virus cannot teleport through walls, lockdowns necessarily prevent transmission.
This is the sort of mis-use of logic that has led to so many problems.
"Lockdowns" is shorthand for "people not coming into contact with other people in a way that spreads the virus"
It's shorthand for a set of government policies that were intended to reduce contact, not eliminate it, because "not coming into contact with other people" is impossible. People still have to go to shops, hospitals, care homes, live with each other, travel around and so on even during a lockdown.
Your belief that it's "completely absurd" to say lockdowns don't affect mortality is based on the kind of abstract but false reasoning that consistently leads epidemiologists astray. Consider a simple scenario in which lockdowns have no effect that's still compatible with germ theory - you're exposed to the virus normally 10 times per week every week. With lockdowns that drops to 5 times a week. It doesn't matter. Everyone is still exposed frequently enough that there will be no difference in outcomes.
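To make the arithmetic behind that scenario concrete (the numbers are purely illustrative assumptions, not data): if each exposure independently carries some infection risk p, the chance of escaping infection over t weeks is (1-p)^(n*t), which goes to essentially zero for any plausible p whether n is 10 or 5 exposures per week.

    # Illustrative only: per-exposure risk and durations are made-up numbers.
    p = 0.02                      # assumed infection risk per exposure
    for n in (10, 5):             # exposures/week, without vs with lockdown
        for weeks in (4, 26, 52):
            escape = (1 - p) ** (n * weeks)
            print(n, weeks, round(1 - escape, 3))  # P(infected at least once)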
"Because the virus cannot teleport through walls, lockdowns necessarily prevent transmission"
Viruses can in fact teleport through walls: "The SARS virus that infected hundreds of people in a 33-story Hong Kong apartment tower probably spread in part by traveling through bathroom drainpipes, officials said yesterday in what would be a disturbing new confirmation of the microbe's versatility."
"you're exposed to the virus normally 10 times per week every week. With lockdowns that drops to 5 times a week."
In this scenario, lockdowns work fine, but are insufficient! Perhaps with stringent use of N95 masks during contactless food ration delivery it can be reduced to 0.05 times per week. Then the pandemic ends and we go back to normal.
Also, if you only go out half as often, that's not exactly a lockdown either. A lockdown is people not coming into contact with other people in a way that spreads the virus. It's not people still often coming into contact with other people in a way that spreads the virus but not as often as before.
You don't appear to realize that food gets to your front door via a large and complex supply chain that involves many people doing things physically together at almost every point. Your definition of a lockdown is physically impossible even for cave men to sustain, let alone an advanced civilization. It isn't merely a matter of "works but insufficient".
This kind of completely irrational reasoning is exactly why lockdowns are now discredited. The fact that the response to "here's lots of data showing that lockdowns didn't work" is to demand an impossible level of lockdown that would kill far more people than COVID ever could simply through supply chain collapse alone, really does say it all.
> A lockdown is people not coming into contact with other people in a way that spreads the virus. It's not people still often coming into contact with other people in a way that spreads the virus but not as often as before.
That might be a definition issue. Here in Germany, we had several lockdowns. Many (most?) countries would say we had no lockdown at all.
The question isn’t whether lockdowns with compliance work, the question is how effective is lockdown as a policy. If you create a lockdown policy, what is the impact of that?
> First, people respond to dangers outside their door. When a pandemic rages, people believe in social distancing regardless of what the government mandates. So, we believe that Allen (2021) is right, when he concludes, “The ineffectiveness [of lockdowns] stemmed from individual changes in behavior: either non-compliance or behavior that mimicked lockdowns.”
> Third, even if lockdowns are successful in initially reducing the spread of COVID-19, the behavioral response may counteract the effect completely, as people respond to the lower risk by changing behavior.
Since it is an obvious consequence of the germ theory of disease that isolation stops the spread of disease, the only real question is if lockdowns are efficient at enforcing isolation (and in what conditions, with what costs etc). At best, the paper concludes that they are not, and that government propaganda about the risks of the disease works as well or better.
> You're attempting to refute a meta-analysis of studies looking at all the data with the example of a single country. That's meaningless. A single data point can falsify a theory but it cannot prove a theory. To prove lockdowns work there has to be a consistent impact of the policy.
I would say the null hypothesis would be that lockdowns do work, by relatively simple, almost mechanical, principles (lockdown forces isolation, isolation means the virus can't physically get from one person to another).
And there is basically no country that did well in the pandemic without significant isolation. Whether that isolation was caused by effective, if perhaps draconian, lockdowns (such as in China or Vietnam) or by cultural norms and self-preservation (such as in Finland or Norway), this remains true.
The real exceptions are countries that successfully isolated themselves from the outside world and thereafter isolated only those carrying the virus, through border closures and strict quarantine requirements plus testing - Taiwan and New Zealand are examples.
For this type of response, using broad statistical studies that equate very different ground-level phenomena (lockdowns varied wildly in form and in the degree of compliance, but the paper ignores all that in its quantitative data) to make it look good on a graph is mostly misleading - but what more can you expect from three economists trying their hand at epidemiology and sociology?
"The model generated totally different output every time you ran it, even with fixed seeds." - I remember seeing code takedowns of the model from anti-lockdown people who repeatedly cite this issue.
But there is a valid reason for this to happen, and it doesn't mean bugs in the code. If the code is run in a distributed way (multiple threads, processes or machines), which it was, the order of execution is never guaranteed. So even setting the seed will produce a different set of results if the outcomes of each separate instance depend on each other further in the computation.
There are ways to mitigate this, depending on the situation and the amount of slowdown that's acceptable. Since this model was collecting outcomes to create a statistical distribution, rather than a single deterministic number, it didn't need to.
The fact that the model draws from distributions will also be why different runs produced possibly vastly different results: those results would be at different ends of a distribution. Only distributions are sampled and used, not single numbers.
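For anyone who hasn't hit this before, here's a toy demonstration (mine, nothing to do with the ICL code) of one such mechanism: floating point addition isn't associative, so the same seeded draws reduced in a different scheduling order give slightly different totals.

    import random

    random.seed(42)  # identical seed, identical draws
    values = [random.uniform(0, 1) * 10 ** random.randint(-8, 8)
              for _ in range(100_000)]

    in_order = sum(values)            # one possible execution order
    shuffled = list(values)
    random.shuffle(shuffled)
    reordered = sum(shuffled)         # another order for the same work

    print(in_order == reordered)      # typically False
    print(abs(in_order - reordered))  # small but nonzero drift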
Regarding the GCSE level comment, my concern was the opposite, that the model was trying to model too much, and that inaccuracies would build up. No model is perfect (including this one) and the more assumptions made the larger the room for error. But they validated the model with some simpler models as a sanity check.
My view on criticisms of the model was that they were more politically motivated, and the code takedowns were done by people who may have been good coders but didn't know enough about statistical modeling.
> "The model generated totally different output every time you ran it, even with fixed seeds." - I remember seeing code takedowns of the model from anti-lockdown people who repeatedly cite this issue.
But there is a valid reason for this to happen, and it doesn't mean bugs in the code. If the code is run in a distributed way (multiple threads, processes or machines), which it was, the order of execution is never guaranteed.
Then there's literally no point to using PRNG seeding. The whole point of PRNG seeding is so you can define some model in terms of "def model(inputs, state) -> output" and get the "same" output for the same input. I put "same" in quotes because defining sameness on FP hardware is challenging, but usually 0.001% relative tolerance is sufficiently "same" to account for FP implementation weirdness.
If you can't do that, then your model is not a pure function, in which case setting the seed is pointless at best, and biasing/false sense of security in the worst case.
As you mention, non-pure models have their place, but reproducing their results is very challenging, and requires generating distributions with error bars - you essentially "zoom out" until you are a pure function again, with respect to aggregate statistics.
It does not sound like this model was "zoomed out" enough to provide adequate confidence intervals such that you could run the simulation, and statistically guarantee you'd get a result in-bounds.
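A sketch of what that tolerance check could look like (the 1e-5 relative tolerance mirrors the 0.001% figure above; `model` is a hypothetical stand-in, not the ICL interface):

    import math

    def runs_match(run_a, run_b, rel_tol=1e-5):
        # Allow only FP-implementation-level wobble between seeded runs.
        return all(math.isclose(a, b, rel_tol=rel_tol)
                   for a, b in zip(run_a, run_b))

    # With a pure, seeded model this should always hold:
    # assert runs_match(model(inputs, seed=1), model(inputs, seed=1))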
I reckon the PRNG seeding in such a case might be used during development/testing.
So, run the code with a seed in a non distributed way (e.g. in R turn off all parallelism), and then the results should be the same in every run.
Then once this test output is validated, depending on the nature of the model, it can be run in parallel, and guarantees of deterministic behaviour will go, but that's ok.
I didn't develop the model, so can't really say anything in depth beyond the published materials.
I just found it odd at the time how this specific detail was incorrectly used by some to claim the model was broken/irredeemably buggy.
Edit: Actually, in general, perhaps there's one other situation where the seed might be useful, assuming you have used a seed in the first place. Depending on the distributed environment, there's no guarantee that the processes or random number draws will be run in the same order. But it might be that in most cases they're in the same order. This might bias the distribution of the samples you take. So you might want to change the seed on every run to protect yourself from such nasty phantom effects.
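For what it's worth, there's a middle ground that keeps reproducibility without that ordering bias: derive an independent, well-separated stream per worker from one master seed. A sketch using numpy's SeedSequence (the worker count and master seed here are arbitrary assumptions):

    import numpy as np

    master = np.random.SeedSequence(20200316)
    children = master.spawn(8)                    # one child seed per worker
    rngs = [np.random.default_rng(c) for c in children]

    # Worker i always uses rngs[i], so its draws don't depend on which worker
    # the scheduler happens to run first; the aggregate is reproducible.
    samples = [rng.normal(size=1000).mean() for rng in rngs]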
My understanding is the bugginess is due to *unintended* nondeterminism; in other words, things like a race condition where two threads write some result to the same memory address, or singularities/epsilon errors in floating point calculations leading to diverging results.
Make no bones about it, these are programming faults. There's no reason why distributed, long-running models can't produce convergent results with a high degree of determinism given the input state. But this takes some amount of care and attention.
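A minimal sketch of that care and attention, assuming an embarrassingly parallel workload: keep each worker's partial result separate and reduce in a fixed order, rather than letting threads race to update shared state.

    from concurrent.futures import ProcessPoolExecutor

    def partial_sum(chunk):
        return sum(chunk)  # each worker's result is independent

    def deterministic_total(data, workers=4):
        chunks = [data[i::workers] for i in range(workers)]
        with ProcessPoolExecutor(max_workers=workers) as pool:
            partials = list(pool.map(partial_sum, chunks))  # order preserved
        return sum(partials)  # fixed reduction order => reproducible output

    if __name__ == "__main__":
        data = [x * 1e-8 for x in range(1_000_000)]
        assert deterministic_total(data) == deterministic_total(data)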
> So you might want to change the seed on every run to protect yourself from such nasty phantom effects.
That's a perfect example of what I mean where a seed is actually worse. If you know you can't control determinism, then you might as well go for the opposite: ensure your randomness is high quality enough that it approximates a perfect uniform distribution. Dropping the seed here means you are more likely to capture the true distribution of the output.
The other takedown reviews focused on the fact that there was non-determinism despite a seed, without understanding that's not necessarily a problem.
Agreed on the second point about not having a seed, but I added the "assuming you have used a seed" caveat because sometimes people do use the seed for some reproducible execution modes (even multi-thread/process ones), which are fine, and it's just easier to randomly vary the seed rather than remove it altogether when running in a non-deterministic mode.
"There are ways to mitigate this, but since the model was collecting outcomes to create a statistical distribution, rather than a single deterministic number, it didn't need to."
This is the justification the academics used - because we're modelling probability distributions, bugs don't matter. Sorry, but no, this is 100% wrong. Doing statistics is not a get-out-of-jail-free card for arbitrary levels of bugginess.
Firstly, the program wasn't generating a probability distribution as you claim. It produced a single set of numbers on each run. To the extent the team generated confidence intervals at all (which for Report 9 I don't think they did), it was by running the app several times and then claiming the variance in the results represented the underlying uncertainty of the data, when in reality it was representing their inability to write code properly.
Secondly, remember that this model was being used to drive policy. How many hospitals shall we build? If you run the model and it says 10, and then someone makes the graph formatting more helpful, reruns it, and now it says 4, that's a massive real-world difference. Nobody outside of academia thinks it's acceptable to just shrug and say, well, it's just probability, so it's OK for the answers to wildly thrash around like that.
Thirdly, such bugs make unit testing of your code impossible. You can't prove the correctness of a sub-calculation because it's incorporating kernel scheduling decisions into the output. Sure enough Ferguson's model had no functioning tests. If it did, they might have been able to detect all the non-threading related bugs.
Finally, this "justification" breeds a culture of irresponsibility and it's exactly that endemic culture that's destroying people's confidence in science. You can easily write mathematical software that's correctly reproducible. They weren't able to do it due to a lack of care and competence. Once someone gave them this wafer thin intellectual sounding argument for why scientific reproducibility doesn't matter they started blowing off all types of bugs with your argument, including bugs like out of bounds array reads. This culture is widespread - I've talked to other programmers who worked in epidemiology and they told me about things like pointers being accidentally used in place of dereferenced values in calculations. That model had been used to support hundreds of papers. When the bugs were pointed out, the researchers lied and claimed that in only 20 minutes they'd checked all the results and the bugs had no impact on any of them.
Once a team goes down the route of "our bugs are just CIs on a probability distribution" they have lost the plot and their work deserves to be classed as dangerous misinformation.
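To make the testing point concrete, here's the sort of regression test a properly seeded model should pass trivially; `run_model` is a hypothetical toy of my own, not the actual ICL entry point.

    import random

    def run_model(r0, days, seed):
        rng = random.Random(seed)      # all randomness flows from the seed
        infected, series = 1.0, []
        for _ in range(days):
            infected *= r0 * rng.uniform(0.9, 1.1)
            series.append(infected)
        return series

    def test_fixed_seed_is_reproducible():
        assert run_model(2.5, 30, seed=7) == run_model(2.5, 30, seed=7)

    def test_different_seeds_differ():
        assert run_model(2.5, 30, seed=7) != run_model(2.5, 30, seed=8)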
"My view on criticisms of the model were that it was more politically motivated"
Are you an academic? Because that's exactly the take they like to always use - any criticism of academia is "political" or "ideological". But expecting academics to produce work that isn't filled with bugs isn't politically motivated. It's basic stuff. For as long as people defend this obvious incompetence, people's trust in science will correctly continue to plummet.
If you check the commit history, you'll see that he quite obviously didn't work with the code much at all. Regardless, if he thinks the model is not worthless, he's wrong. Anyone who reviews their bug tracker can see that immediately.
> To the extent the team generated confidence intervals at all (which for Report 9 I don't think they did), it was by running the app several times and then claiming the variance in the results represented the underlying uncertainty of the data, when in reality it was representing their inability to write code properly.
Functionally, what's the difference? The output of their model varied based on environmental factors (how the OS chose to schedule things). The lower-order bits of some of the values got corrupted, due to floating-point errors. In essence, their model had noise, bias, and lower precision than a floating point number – all things that scientists are used to.
Scientists are used to some level of unavoidable noise from experiments done on the natural world because the natural world is not fully controllable. Thus they are expected to work hard to minimize the uncertainty in their measurements, then characterize what's left and take that into account in their calculations.
They are not expected to make beginner-level mistakes when solving simple mathematical equations. Avoidable errors introduced by doing the maths wrong are fundamentally different from unavoidable measurement uncertainty. The whole point of doing simulations in silico is to avoid the problems of the natural world and give you a fully controllable and precisely measurable environment, in which you can re-run the simulation whilst altering only a single variable. That's the justification for creating these sorts of models in the first place!
Perhaps you think the errors were small. The errors in their model due to their bugs were of the same order of magnitude as the predictions themselves. They knew this but presented the outputs to the government as "science" anyway, then systematically attacked the character and motives of anyone who pointed out they were making mistakes. Every single member of that team should have been fired years ago, yet instead what happened is the attitude you're displaying here: a widespread argument that scientists shouldn't be or can't be held to the quality standards we expect of a $10 video game.
How can anyone trust the output of "science" when this attitude is so widespread? We wouldn't accept this kind of argument from people in any other field.
At the time, critics of the model were claiming the model was buggy because multiple runs would produce different results. My comment above explains why that is not evidence for the model being buggy.
Report 9 talks about parameters being modeled as probability distributions, i.e. it's a stochastic model. I doubt they would draw conclusions from a single run, as the code is drawing a single sample from a probability distribution. And, if you look at the paper describing the original model (cited in Report 9), they do test the model with multiple runs. On top of that they perform sensitivity analyses to check that erroneous assumptions aren't driving the model.
I have spent time in academia, but I'm not an academic, and don't feel any obligation to fly the flag for academia.
Regarding the politics, contrast how the people who forensically examined Ferguson's papers were so ready to accept the competing (and clearly incorrect https://www.youtube.com/watch?v=DKh6kJ-RSMI) results from Sunetra Gupta's group.
Fair point about academic code being messy. It's a big issue, but the incentives are not there at the moment to write quality code. I assume you're a programmer - if you wanted to be the change you want to see, you could join an academic group, reduce your salary by 3x-4x, and be in a place where what you do is not a priority.
Your comment above is wrong. Sorry, let me try to explain again. Let's put the whole fact that random bugs != stochastic modelling to one side. I don't quite understand why this is so hard to understand but, let's shelve it for a moment.
ICL likes to claim their model is stochastic. Unfortunately that's just one of many things they said that turned out to be untrue.
The Ferguson model isn't stochastic. They claim it is because they don't understand modelling or programming. It's actually an ordinary agent-like simulation of the type you'd find in any city builder video game, and thus each time you run it you get exactly one set of outputs, not a probability distribution. They think it's "stochastic" because you can specify different PRNG seeds on the command line.
If they ran it many times with different PRNG seeds, then this would at least quantify the effect of randomness on their simulation. But, they never did. How do we know this? Several pieces of evidence:
2. The program is so slow that it takes a day to do even a single run of the scenarios in Report 9. To determine CIs for something like this you'd want hundreds of runs at least. You could try and do them all in parallel on a large compute cluster, however, ICL never did that. As far as I understand their original program only ran on a single Windows box they had in their lab - it wasn't really portable and indeed its results change even in single-threaded mode between machines, due to compiler optimizations changing the output depending on whether AVX is available.
3. The "code check" document that falsely claims the model is replicable, states explicitly that "These results are the average of NR=10 runs, rather than just one simulation as used in Report 9."
So, their own collaborators confirmed that they never ran it more than once, and each run produces exactly one line on a graph. Therefore even if you accept the entirely ridiculous argument that it's OK to produce corrupted output if you take the average of multiple runs (it isn't!), they didn't do it anyway.
Finally, as one of the people who forensically examined Ferguson's work, I never accepted Gupta's either (not that this is in any way relevant). She did at least present CIs, but they were so wide they boiled down to "we don't know", which seems to be a common failure mode in epidemiology - CIs are presented without being interpreted, such that you can get values like "42% (95% CI 6%-87%)" appearing in papers.
I took a look at point 3, and that extract from the code check is correct. Assuming they did only one realisation, I was curious why; it would be unlikely to be an oversight.
"Numbers of realisations & computational resources:
It is essential to undertake sufficient realisation to ensure ensemble behaviour of a stochastic is
well characterised for any one set of parameter values. For our past work which examined
extinction probabilities, this necessitates very large numbers of model realizations being
generated. In the current work, only the timing of the initial introduction of virus into a country is
potentially highly variable – once case incidence reaches a few hundred cases per day, dynamics
are much closer to deterministic."
So it looks like they did consider the issue, and the number of realisations needed depends on the variable of interest in the model. The code check appears to back their justification up:
"Small variations (mostly under 5%) in the numbers were observed between Report 9 and our runs."
The code check shows in their data tables that some variations were 10% or even 25% from the values in Report 9. These are not "small variations", nor would it matter even if they were because it is not OK to present bugs as unimportant measurement noise.
The team's claim that you only need to run it once because the variability was well characterized in the past is also nonsense. They were constantly changing the model. Even if they thought they understood the variance in the output in the past (which they didn't), it was invalidated the moment they changed the model to reflect new data and ideas.
Look, you're trying to justify this without seeming to realize that this is Hacker News. It's a site read mostly by programmers. This team demanded and got incredibly destructive policies on the back of this model, which is garbage. It's the sort of code quality that got Toyota found guilty in court of severe negligence. The fact that academics apparently struggle to understand how serious this is, is by far a faster and better creator of anti-science narratives than anything any blogger could ever write.
I looked at the code check. The one 25% difference is in an intermediate variable (peak beds). The two differences of 10% are 39k deaths vs 43k deaths, and 100k deaths vs 110k deaths. The other differences are less than 5%. I can see why the author of the code check would reach the conclusion he did.
I have given a possible explanation for the variation, that doesn't require buggy code, in my previous comments.
An alternative hypothesis is that it's bug-driven, but very competent people (including eminent programmers like John Carmack) seem to have vouched for it on that front. I'd say this puts a high burden of proof on detractors.
He is unfortunately quite wrong, see below. I don't believe he could have reviewed the code in any depth because the bugs are both extremely serious and entirely objective - they're just ordinary C type programming errors, not issues with assumptions.
Also food and other required resources are similarly unable to teleport through walls, so the people involved in growing, transporting, preparing and delivering them to your door can't do "lockdown" like the minority who are able to work from home.
I have been adjacent to industrial and academic partnerships where both the university and company wanted to maintain the IP of the work and in their minds that extended to the software. Paper was published and the source code was closely guarded. I wondered how people could replicate the findings without the fairly complex system the researchers used.
> Along the way, everyone knew and when people tried to bring up concerns higher ups in the institution suppressed the knowledge.
As you are doing here. I don't see a name or any specifics anywhere in your comment. Of course not including any names or specifics also means you could be making it up for a good story. We have no way of knowing.
Presumably OP does not have hard evidence and is only relating an anecdote. There isn’t much of an incentive to waste time and resources fighting for academic integrity or some other high-minded concept like this. Barring those incentives, we’ll be left with anecdotes on forums and a continued sense of diminished trust in academia.
OP sounded very confident that the persons allegedly involved _are_, in no uncertain terms, not just definitely total frauds but also definitely engaged in a giant fraud conspiracy that definitely goes all the way to the top. If what they meant was "someone told me once that..." they could have said that instead, but they've chosen to word things very differently. At best they've drastically overstepped the reasonable limits of what claims one can rightly make, and that assessment feels extremely generous.
Yes. Exactly. Thank you for sharing what I was going to share. Corruption exists where it is allowed by the people who act out of cowardice.
As an aside, I've worked with plenty of academics and while I sometimes thought their research area was stupidly low stakes, the only researchers that I thought were truly wasting time were the ones that had to do research for a medical degree. Basically forced research.
Now I went to a premier university and I'm friends with some smart cookies, but I don't buy for a second the overall theme of this comment chain. There is a reason the West is incredibly wealthy and it isn't because our best and brightest are faking it.
There is a lot less fakery in science than poorly-designed studies, misleading endpoints, underdocumented or incorrectly documented methods, and cargo-culting. The success of the process comes from having a good filtration process to sift through this body of work, and the idea that there will always be some people in the system doing actually good work.
That said, I have also witnessed plenty of low-level fraud: changing of dates to match documentation, discarding "outlier" samples without justification or even documentation, etc. Definitely enough to totally invalidate a result in some cases.
Your first comment was effectively "Everyone knows this person is a fraud and no one is willing to stand up and put a stop to it".
Your second comment was effectively "Well I don't know they are a fraud and I'm not going to be the one who upsets people by trying to put a stop to it".
I don't say this as a criticism of you. I say this a defense of the people you are criticizing. Stopping people like this takes a lot of work and often some personal risk. Most of us aren't willing to do it despite us pretending otherwise.
>Stopping people like this takes a lot of work and often some personal risk.
I don't recall where I first heard it, but the principle that it takes 10x the amount of energy to refute bullshit as it does to generate it certainly seems to apply in this case.
A post-doc of a collaborator of a professor of mine was once found to have doctored data. The professor and her collabs only found out when they looked at the images and found out that there were sections that seemed to have been copy/pasted. Getting the papers retracted was a gruelling process that (iirc) took over 3 years.
>The people in charge were given evidence, face much less risk, and it's their job to put a stop to it.
That is an awfully authoritative statement to make based off what OP shared.
But either way, this has been proven time and time again, whether we are talking about simple corruption like OP mentioned or more serious forms of injustice. People will judge themselves based on motivations and others based on actions. We can excuse ourselves because of the personal inconvenience that acting would cause us, but other people don't get that luxury: they get criticized purely for their lack of action, because if we were in their situation surely we would do the right thing. Being honest about this is an important step to actually fixing the system, because we need to identify the motivations which lead to the lack of action in order to remove them and encourage action. Simply vilifying people for not acting accomplishes nothing.
> That is an awfully authoritative statement to make based off what OP shared.
Which part do you disagree with? Maybe the 'much' on much less risk?
'given evidence' is true unless OP made up the story. (And even if OP made up the story, then we're judging fictional people and the judgements are still valid.)
I think 'less risk' goes part and parcel with being administration rather than someone lower rank reporting a problem.
And it seems clear to me that it's their job.
So, criticism. Which is not a particularly harsh outcome. And doesn't necessarily mean they made the wrong decision, but them providing a justification would be a good start.
Inaction is often excusable when it's not your job. It's always important to look at motivations, but vilification can also be appropriate when there's dereliction. Sometimes there are no significant motivations leading to lack of action, there's just apathy and people that shouldn't have been hired to the position.
>'given evidence' is true unless OP made up the story.
OP never mentioned evidence. People just "knew" this person was a fraud but there was no mention of the actual evidence of fraud. The closest thing to evidence is that "their work was terrible" but that isn't evidence of fraud.
>I think 'less risk' goes part and parcel with being administration rather than someone lower rank reporting a problem.
We have no idea the risks involved. It would be highly embarrassing for a prestigious school and/or instructor to admit that they admitted a fraud into their program. Maybe this isn't even the first time this has happened to these people. Would you want to be the person known for repeatedly being duped by frauds? Maybe that would ruin your reputation more than looking the other way and letting this person fail somewhere else, where you would not be directly tied to their downfall. It is also incredibly risky to punish this person without hard evidence, as that can lead to a lawsuit.
These are not meant to be definitive statements. They are just hypotheticals that show how we can't judge people's motivations without knowing a lot more about the situation.
This website is pretty easy to post on anonymously. It takes five seconds to make a throwaway and another three to post a name.
While I agree that there are frauds out there in academia and elsewhere, I have no reason to believe that you’re not yourself some sort of fraud. You’ve essentially posted “I have direct knowledge of [unspecified academic fraud in which somebody claimed to have direct knowledge of [unspecified academic fraudulent conclusion] but can’t back it up] but won’t back it up”
Your overall point is… what? Fraud perpetuated by cowardice? Self-interest? An overall sense of apathy towards the truth? Your comment could be construed as any of those.
There are people that love to spread fear, uncertainty and doubt without having to rely on being truthful. People who are intentionally misleading, with the sole intent of leveraging people's biases and emotions to confirm folks' notions and whip people up into an artificially created frenzy, make statements like yours.
Serious questions:
1. Do you actually give a shit about this big fraud you’ve brought up but not revealed?
And
2. Why did you post?
> 1. Do you actually give a shit about this big fraud you’ve brought up but not revealed?
To expose it directly here would likely have little effect generally, but would have an outsized effect personally (damaging trust). If it did have an impact it would likely expose those involved (many careers). I am not in the command chain, I have seen the evidence and it's overwhelming. But I'm not in a position to enact the requisite change.
That said, I don't actually give a shit about this particular case. It's widespread, insanely widespread. Most studies / work cannot be replicated.
> 2. Why did you post?
As an anecdote to highlight something that I've seen. I also linked to several other informational pieces with public accounts (so don't trust mine, fine -- trust theirs).
> To expose it directly here would likely have little effect generally, but would have an outsized effect personally (damaging trust). If it did have an impact it would likely expose those involved (many careers). I am not in the command chain, I have seen the evidence and it's overwhelming. But I'm not in a position to enact the requisite change.
Sorry, I don’t mean to pick at you but… what?
If you were to anonymously post the name of an academic fraud, you personally would necessarily be found out, and it would ruin multiple careers?
The powers that be know who you are, know of the fraud and its nature and those involved? You’ve been privy to this big juicy secret that’s shared by many, but if its content were to be revealed, you would certainly be the one pointed out?
Are you the only person that could reasonably know about and publicly object to this fraud? If so, is it a necessary function of your social or professional life to cover up this fraud? If so, I’ll go back to “why did you post?” (“Sharing anecdotes” isn’t really an answer to “Why did you share this anecdote?”)
Not sure why people are that hostile. I'm sure that identifying someone can also identify you if the circle around the person is small enough, and if said person is powerful enough, people choosing to believe them over him would end careers, yes. He'd have to say "well, I worked on this particular study with him, and the data was made up rather than actually collected."
I think it's more "how dare you hint scientists are corrupt!" that kind of drives the outrage.
> This website is pretty easy to post on anonymously.
Correct me if I am wrong, but this website and its owners are in the U.S. of A. If so, an appropriate court order would force the disclosure of logs, IP addresses, accounts, etc. There are plenty of examples where sites had to disclose sufficient details to track an 'anonymous' writer down.
With multi-million (billion?) dollar endowments on the line, I would be wary of presuming a throwaway account would be enough.
VPNs exist and are trivial to use. As is the Tor browser. Yes, it's probably compromised already, but the FBI is not going to show their hand chasing a random libel case.
People on HN are so odd sometimes. Ah yes, I must be a liar if I don't want to spill all the details my friends told me in confidence and ruin our relationship.
Sure, sometimes people lie on the internet but this is not an outlandish story.
“Best case” scenario, everyone loses their funding, including the colleagues who did report this stuff. The institution, the lab, etc. would lose their funding. Those who wanted to get PhDs in those labs would lose theirs. It'll have a serious negative impact on everyone, even those who did the correct thing.
This is why academia is as corrupt as it is. They all evaluate each other’s paper, give each other grants and anyone who exposes anything loses their entire careers.
The two parent comments read to me (someone uninvolved) as wholly throwing all people, institutions, grant selection and everything else under the bus, without distinction. The drivers for that are unknowable, but I guarantee that neither these statements nor their complete complement are the truth. Proof is that without sufficient distinction, nothing claimed is distinguishable enough to even weigh. Add to that that the selection pressure in many institutions is many orders of magnitude more than ideal, and the consequences of the outcomes similar.
I don't know, which puts me exactly where I was ten minutes ago. They have no control over the situation, and if they hadn't posted I would know even less, so it's not suppression.
They have complete control over how complicit they are in preserving the coverup of acts they claim to know are true.
> and if they hadn't posted I would know even less
The reality is that right now you know exactly as much as before they posted because what they posted was unsubstantiated. They may as well have said that their name is Princess Peach and that they're pregnant with the mustached child of a certain Italian plumber for all the good it does you.
One of three things must be true:
1) They have real firsthand knowledge that the claims are true and they're actively deciding to protect the identity(ies) of a conspiracy of rampant fraudsters whose actions are so egregious that they tarnish the very essence of the scientific academy itself.
2) They don't have any real knowledge that the claims are true, and the story is rumormongering.
3) I swear I had a third one, but now that I've written those two I can't think of what it was. I'll leave this placeholder here in case it comes to me.
Of course, there are frauds in any industry/profession.
But in my experience (math, science, & engineering), it is actually far less prevalent than in other places.
Forget the overzealous #sciencetwitter people. I have found that academia is one of the rare places where people at the absolute top of their field are often actually modest and aware of their ignorance about most things.
>I have found that academia is one of the rare places where people at the absolute top of their field are often actually modest and aware of their ignorance about most things.
This view doesn't really align with how public policy gets designed. While those people exist, they aren't the problem (or they are, insofar as they aren't voicing their positions loudly enough).
Coming from a place of modesty and acknowledging limitations is not the "believe science" movement.
Eric seems lightly inclined to fringe theories and self-importance, but nothing I'd call fraud. Bret has been pushing some pretty unfortunate stuff though, including prophylactic ivermectin as a superior alternative to vaccination:
> “I am unvaccinated, but I am on prophylactic ivermectin,” Weinstein said on his podcast in June. “And the data—shocking as this will be to some people—suggest that prophylactic ivermectin is something like 100% effective at preventing people from contracting COVID when taken properly.”
He wasn't just claiming that ivermectin might have some efficacy against SARS-CoV-2 (possible, though I doubt it), or that the risks of the vaccine were understated to the public (basically true; but it's a great tradeoff for adults, and probably still the right bet for children). Bret was clearly implying that for many people--including himself, and he's not young--the risk/benefit for prophylactic ivermectin was more favorable than for the vaccine. There was no reasonable basis for such a belief, and the harm to those who declined vaccination based on such beliefs has become obvious in the relative death rates.
The first article I've linked above is by Yuri Deigin, who had appeared earlier on Bret's show to discuss the possibility that SARS-CoV-2 arose unnaturally, from an accident in virological research. This was back when that was a conspiracy theory that could get you banned from Facebook, long before mainstream scientists and reporters discussed that as a reasonable (but unproven) hypothesis like now. So I don't think Bret's services as a contrarian are entirely bad, but they're pretty far from entirely good.
What's fraudulent about them is not their papers (there are none of any relevance to speak of) but their character. They are both self proclaimed misunderstood geniuses who have been denied Nobel prizes in spite of their revolutionary discoveries (in 3 different fields, Physics, Economics and Evolutionary Biology). In actuality they are narcissistic master charlatans with delusions of grandeur.
Then the comment by ummonk above is off topic because there is no credible claim of scientific fraud. Lots of people are blowhards with odd opinions. So what?
And the ones that are masquerading as having Nobel-worthy research chops to get the audience to believe their gripes about the scientific establishment are on topic enough.
It seems you don't like them for some reason, but complaining about the scientific establishment isn't fraud and has nothing to do with censorship. So what's your point?
One claims to have discovered a Theory Of Everything, putting forth a paper riddled with mathematical errors and with the caveat that it is a "work of entertainment". The other claims to have made a Nobel worthy discovery that revolutionizes evolutionary biology.
I wouldn't use the word 'charlatan' as leniently. Without commenting on the validity of their work, the 'mainstream opinion' about them is most certainly negative. Yet instead of pivoting elsewhere, they stick by their convictions. They might be right or wrong on their opinions, but they are hardly doing it to win any favours. And it's certainly not wrong to stand by something you believe even if the mainstream discredits you; time will tell who was right.
I've got no dog in the fight, but I've listened to some podcasts by Bret Weinstein and Heather Heying, and compared to the absolute nonsense I see on TV today, it is a breath of fresh air. They're reading scientific articles, discussing implications in long form, and have been open and honest about their mistakes.
I have not seen or heard of Eric Weinstein so I can't comment.
I'm not sure what kind of bar you're using to compare your chosen media, but it seems extremely high, and I'd like to know what you consider to be suitably informative.
They really haven't. They continued to double down on ivermectin and other COVID-era flim flam as it came to light that more studies were dodgy or outright fraudulent. It's pretend science theatre from a former small-university lecturer who managed a couple of research papers in 20 years. For example, telling the audience with a straight face that it doesn't matter if the studies going into a meta-analysis are biased because the errors will cancel out.
>There are many examples, at the end of the day... the people in the institutions will protect themselves.
This extends well beyond the science itself, for what it's worth. It's an open secret in every department that professors X, Y and Z have the same level of sexual restraint as Harvey Weinstein. It doesn't stop the "woke" institutions doing everything they can to protect these professors' reputations, though.
> Their lab mates tried to raise concerns and nothing happened. That person graduated, with everyone on the committee knowing the issues.
I understand that after some time it would be an embarrassment for the department because they hired and vetted them, etc. But why did it get tolerated initially? Was it a case of nepotism? It seems they must have had some kind of special privilege or status to skate by so easily despite the accusations.
I agree that there is an integrity problem in some areas of research (due to incentives), however the Eric Weinstein reference is laughable. I'd consider him a prime example of pseudo science for dollars / influence with zero credibility. Not saying he can't be on the right side of an issue some times, just that he's a particularly untrustworthy source, and the way he gets to his conclusions is just... wow.
> Most cs work is equally useless. It’s all a farce.
Hey, careful there. It must depend on the area, because the type theory and functional programming research comes with, and always has come with, proofs and working code. It couldn't be more rigorous or useful.
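A toy illustration of what that means in practice (my own sketch in Haskell, not from any particular paper): encode the invariant "this list is non-empty" in a type, and the head function needs no runtime check and cannot fail.

    -- A list that is guaranteed by construction to hold at least one element.
    data NonEmpty a = a :| [a]

    -- No empty-list case exists, so this total function cannot crash.
    neHead :: NonEmpty a -> a
    neHead (x :| _) = x

    main :: IO ()
    main = print (neHead (1 :| [2, 3 :: Int]))

The claim "neHead never fails" isn't a sentence in a paper that a replication has to test; the compiler re-checks it on every build.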
It's almost like our fav person Ayn Rand got that right [0]. What is pure science? It's little without a conscience, and there are a lot of perverse incentives. For one, I found the requirement to publish (something "worthwhile", which is subjective) during my PhD so stupid. You can work hard and cleverly in science for 4 years yet only debunk stuff or not find anything at all (though that by itself should be publishable!), or you can get lucky and publish a lot. All the publish-or-perish pressure does is force subpar stuff into the community. Like you two, I also got a bit disillusioned with non-applied science as we have it.
Edit: I don't want to say (like Ayn Rand, who can be pretty black and white) that it is all bad and we should do away with it, but it's something we should be very aware of and try to build in mechanisms to protect ourselves from these effects.
Probably. I was being sarcastic; if you've been on HN for some time, you know that Ayn is not universally loved here. At times this is ironic, because she matches quite well the opinions of people here wrt government involvement in the market, love, and now indeed state-sponsored science.
Now think about research on chemicals: everybody has a different source, different quality control (most academic labs do zero QC on incoming chemicals). I have bought chemicals from major and minor vendors and I could tell you all kinds of horror stories... Wrong molecule, inert powders added to increase weight, highly toxic impurities... Now add that to assays and academics that have been optimized for years to scream "I HAVE A NEW DRUG AGAINST X" any time they stare too long at the test tube...
This is absolute baloney. I've ordered numerous research grade chemicals from multiple suppliers and not once has any of them been the wrong one nor outside of stated purity grade — and I regularly checked, since it's standard practice. If a solid organic material is in a lower grade of purity it is typically recrystallized.
Now, yes, impurities — even minor ones — can have significant effects. But that tends to be in rare circumstances, and chemists are quite aware of the need to check for it where it matters most, such as in catalysis research.
No one is going to scream "I have a new drug" for something for which the composition is unclear.
I don't know what world you live in, but it isn't one of a typical North American nor European university research lab.
Checking the quality of chemicals entering the lab was literally my job: NMR, MS, IR... And over 15 years I have seen dozens and dozens of cases. Most labs now call HPLC with UV sufficient for quality analysis. Lots of things looked "fine" that way, that's for sure. Note that I was in the drug discovery world, not in the inorganic chemistry world where things are usually of much better quality.
I'm biased - I'm a researcher at a big research institution and finished my PhD 15+ years ago. Seeing how the sausage is made shouldn't cause one to become less "pro-science". It should give people perspective that the scientific process is a process performed by humans, with all their flaws (ego, ambition, competition, etc). Furthermore, science is all about making hypotheses and seeing how they work out - with the understanding that many are wrong for all kinds of reasons (bad idea, bad motivations to push known bad ideas, etc.).
When I hear people disillusioned with how science works, more often than not I see someone who is forced to see that the mental model they had of how it works doesn't line up with how it actually works. It's messy, and has always been messy. Just read some history books, and you'll see "scientific" ideas in the 19th and early 20th century that persisted for a long time simply because some people disliked specific other people with opposing ideas. You'll see horrible egos, non-scientific arguments to reject ideas, people scheming and trying to undermine each other, and so on. Basically, humans being humans to each other. There never was a time when science was performed in the idealized model that we sometimes wish it was done: it always involves people being people, flaws and all.
Plus, I don't know any working scientist who doesn't assume most scientific discoveries are complete bullshit (or, more likely, they aren't bullshit as much as they ultimately contribute no new knowledge or understanding). We all know how the game works and why people put ideas out there (especially with the increased volume of publications people are expected to have now). What keeps people like me going is that I know most ideas are dumb or flat out garbage, but good ones emerge from that churn too and push things along. I like to think that some of my dumb ideas, when I throw them into that churn, will also occasionally push things along.
Don't get me started in the "believe in the science" line. The fact that we've turned that phrase into a political cudgel is not helping the situation.
I agree. One can be sceptical of the system while still working in the system and trusting that there will be a small number of winners that compensate for the sea of noise.
I feel this might resonate with the HN community given how startups are seen currently: most of them fail, some are outright fraud, but a few completely change the game.
The model for assessing the scientific enterprise should be similar to a VC's: most bets will fail, but the few big successes will more than compensate for the losses.
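As a toy illustration with invented numbers:

    fund 100 projects at $1M each        = $100M invested
    99 produce nothing of lasting value  = $0 returned
    1 produces a $300M breakthrough      = $300M returned
    portfolio outcome: 3x, despite a 99% failure rate

The specific figures are made up; the point is that judging the enterprise by its median output misses where the value actually comes from.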
Noise is one thing. There are plenty of instances where humans have had to pick out signals from seas of noise.
The anti-pro-sciencers are claiming there is not just noise, but a sea of deliberately fabricated conspiracy theories concocted over many decades with an ultimate end goal of putting tracking chips in them.
GP here - I honestly think we're mostly in agreement!
It's obviously not a completely worthless institution - there have clearly been some impressive discoveries despite all of the bullshit.
I think the main source of distrust comes from the jarring disconnect between the way that the institution is presented to the public vs. the way it actually functions in real life. "Messy" is a beautiful way to describe things from the inside.
Your comment makes it sound like there's an agenda, some entity consciously misrepresenting how academia works. But academia is really accessible to the public nowadays.
The problem, in my opinion, is how external entities treat science. Some examples:
* Media reporting low-grade papers as "Science" which constantly contradict each other: "Scientists found out $something causes cancer", then a year after that "Scientists found out $thatsamething cures cancer"
* Pseudo-scientific drugs with little to no effect being sold to a health-and-beauty-obsessed general public
* Political think tanks funding flawed research to push their own agenda and heavily echoing that research in their echelons for political gain
It seems that academia has become a foundation for other entities to cling to for profit, and now it's academia's reputation that has been shattered.
>Your comment makes it sound like there's an agenda, some entity consciously misrepresenting how academia works.
There absolutely is.
Scientists and universities constantly declare themselves to be bastions of reason and enlightenment, and declare research that they themselves know is shit to be of great significance and rigor.
I'm not even talking about the outright corruption that you mentioned - just the more mundane things like researchers publishing papers with completely useless/misleading information (e.g. testing an ML algorithm on a single dataset and declaring it a breakthrough in neurosignal decoding) just to get those updoots from your mates.
Even when something in academia doesn't have an agenda, it will be interpreted by everyone else as if it does.
Despite all the brilliant minds and appearance of professionalism and pride and campus pageantry...
it's easily a much more insecure, drama-filled, immature, and petty social dynamic and environment than most high schools.
That seems, imo, to be due to a combination of academia itself and a govt (or govt-style) administration apparatus, where everyone is constantly trying to convince everyone else of the importance of what it is they do.
The problem here is that your initial post, critical of how academia is presented, reads as very black and white to me.
I get the feeling that you claim that it is all fraud and lies, and that those who disagree can see no wrong with it at all.
Then someone responds with a nuanced reply, something that I, as someone who lives in a university town in Sweden, have the impression that almost everyone in academia actually agrees with.
And then it turns out that you also mostly agree. So, now I'm asking myself (and you): Why did I read that first post of yours as so hostile? As someone who was actively trying to create mistrust?
Is it just because how the debate climate is on the internet, or should you perhaps consider trying to be a bit more nuanced when communicating on the internet?
>Why did I read that first post of yours as so hostile? As someone who was actively trying to create mistrust?
I think this happens all the time - popular opinion swings far in one direction, criticisms then go in the opposite direction as disillusionment and problems get exemplified, again too far, back and forth, rediscovering the lessons of the past and settling down.
Thanks so much for the realistic take. I'm in the software industry with a bachelor's degree and currently considering going back to school because I want to try research. Your comment is a very refreshing take. HN is absolutely loaded with anti-academia posts that are rather demotivating to read.
So in science the game is to never be caught lying, and they usually define bullshit as lying. I'm with you in that getting paid to write two six-page papers per year that contain absolutely no new information is bullshit. Not because what's in the papers is a lie, but because pretending society needs to invest in you so that in the future we are all more productive or happy or wise is a lie. Science itself is not bullshit, and most papers are not bullshit; what's bullshit is the idea that we need so much useless science just so that some useful discoveries will emerge from within it, which is a strong underlying premise under which a lot of scientists get paid.
As a person working in science (long past PhD), I disagree.
There are tons of problems in science: the grant system, pressure to publish, etc. Also, everyone in science is still a person with standard people problems: ego, desire to be right, wanting recognition, etc. And some fields have had more issues because of poor statistical methods and the like.
But the self-correcting nature of science is still its biggest redeeming feature. And therefore unless I can figure out things by myself, I'll trust scientific consensus. I may not trust individual papers, as people make mistakes, may have an axe to grind, but I'll believe the consensus. For sure occasionally even scientific consensus will be wrong.
Science does eventually self-correct, but unfortunately it takes far too long to do so.
One area I've studied pretty extensively is the history of cancer treatment. In the long story of the history of cancer treatment, it is absolutely scandalous how often the scientific consensus was wrong and persisted for years in spite of the evidence. For example, the radical mastectomy for the treatment of breast cancer continued to be used for many years, leaving many women disfigured, in spite of wide evidence that it did not produce better outcomes vs more restrained breast tissue removal.
In the history of science, many of these kinds of bad ideas have persisted simply due to deference/seniority - the incentives are all stacked towards paying your dues and not challenging the status quo, and absolutely not towards being right/following the actual scientific method. There is a reason the saying "Science advances one funeral at a time" exists - as Max Planck noted: "a new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents die, and a new generation grows up that is familiar with it."
I saw this first hand walking with my wife through two years of intensive cancer treatment. It seemed impossible to break through the ‘not standard of care’ wall to even incorporate low risk adjuvant therapies.
Overall it just felt like she was a hot potato and nobody wanted to put their name on *anything* outside of protocol. Even blood work. She was treated at the James Cancer Center in Ohio and we got second opinions from the Cleveland Clinic and MD Anderson. These are all fairly well-regarded institutions in cancer treatment. I was expecting strong opinions and got hand-waving and reluctance to interfere with any treatment selected by her primary oncologist. In the process of all of this I read hundreds of studies and research papers, and spoke with numerous PIs, trial coordinators and industry reps. I couldn't get any traction for anything and came away feeling a bit hopeless.
After it was all over I started making public offers of $25k, as a starting point, to just review her case from end to end, assess the quality of her care, and determine if anything could be learned from it. The only takers I got were lawyers hoping to twist it into a malpractice case, which I wasn't interested in.
The experience left me extremely bitter about the current state of healthcare. After a while I was able to develop some empathy for the providers. They're trapped in a system that mortgages their future with student loans and threatens them with litigation and insurance costs. They have to stay on the rails or risk financial ruin.
This seems to be a particularly salient issue in the area of medicine, since many practice but may not actually “do science” unless they’re affiliated with a university or research hospital.
Most doctors are more like engineers than physicists.
They're more like airplane mechanics than engineers. Few doctors design treatments or procedures but they're very careful about performing them according to the right practices.
I just got the horrifying image of a lead engineer who refuses to move on from Java 9 because "it's all you need," but they're working on human bodies. Yeesh.
I just got the horrifying image of a lead engineer who jumps onto a newest JS framework of the day because "it's exciting," but they're working on human bodies. Yeesh.
But the difference came about only due to relentless informed written criticism of scientists, many of whom happily described their critics as spreading misinformation (or similar words from their era).
"Even individual fanatic scientific advocates of the Einsteinian theory seem to have finally abandoned their tactic of cutting off any discussion about it with the threat that every criticism, even the most moderate and scrupulous ones, must be discredited as an obvious effluence of stupidity and malice"
People often present science as some sort of free-floating edifice that "self corrects" through mysterious mechanisms. It doesn't. Scientists with wrong ideas have to be explicitly corrected by other people, some of whom will be random Swiss patent clerks and other outsiders. Therefore you cannot have science without free speech, because otherwise there's no way for bad ideas to be corrected. It's as simple as that. And yet, academic "scientists" are often at the forefront of demanding it be shut down.
Sounds like someone pushing quackery, honestly. "Science-based medicine is flawed and sometimes makes you feel bad! Obviously, you need quackery!"
This also paints a too-simple picture, since there are obvious things like the promise of quick recovery if you do simple stuff (cancer being a big one) and the alternative medicines seeming cheaper than regular medicine (It may be the only treatment you think you can afford in the US). Not to mention the dismal state of "Health classes" - I'm a little over 40, and those classes mostly just told you about the parts of the body, the food pyramid, and that sex was bad and would probably kill you through disease unless you only had sex with another virgin.
I don't think most people really try to argue against "science" as is, or even the scientific process in the large scale. The gripe we have is against people who first do bad science (or popularize them), refuse to admit they're wrong, then patronizingly tell the public to trust them anyways, and still have the audacity to reassure us that "it will all work out (50 years later)".
What’s weird is it makes sense to trust scientific consensus, even though scientific consensus is also kind of territorial and protective.
It always fights against anything new, and will try to be opposed to anything against the status quo. It is so confident it is right it actually feels like it is doing the right thing by trying to suppress ideas opposed to the establishment.
Kind of like Galileo, I guess, and the religious leaders of his day are like the intellectual elitists of our day.
Thinking about the copernican revolution it made sense the establishment was opposed to it, not because it wasn’t factually correct, but it was like knowledge people aren’t ready for. And it was too unpredictable what it would do if the idea became dominant.
It may be useful to separate science itself (with its self-correcting nature and such) from academia and the community/economics of scientists and research. Furthermore, if the incentives for publishing research are wrongly aligned, can you really trust published papers en masse?
Scientific consensus can be a hard thing to define sometimes. Climate change is an example where there are enough studies out there that a meta-study can prove consensus, but what about something more recent and difficult to measure, like the effectiveness of covid vaccines/measures?
You open the newspaper to an article on some subject you know well. In Murray’s case, physics. In mine, show business. You read the article and see the journalist has absolutely no understanding of either the facts or the issues. Often, the article is so wrong it actually presents the story backward—reversing cause and effect. I call these the “wet streets cause rain” stories. Paper’s full of them.
In any case, you read with exasperation or amusement the multiple errors in a story, and then turn the page to national or international affairs, and read as if the rest of the newspaper was somehow more accurate about Palestine than the baloney you just read. You turn the page, and forget what you know.
I’ve read State of Fear, and I agree that Crichton isn’t a great source for scientific facts.
But this quote isn’t really a criticism of science. It’s a criticism of journalism.
I think most people here on HN agree with his premise regarding the quality of science journalism. I’m not so sure about his conclusion that all journalism should be treated with the same suspicion.
His books all have a scientist doing evil because of their lust for grant money.
"State of Fear" is a polemic against climate science.
If all you read was the comment I responded to, then it's no surprise that you don't think I provided enough information to criticize his books. Read his books, then read my criticism.
>His books all have a scientist doing evil because of their lust for grant money.
How unrealistic. Scientists like Peter Daszak and the EcoHealth Alliance would never do anything evil or untoward, such as using grant money to perform dangerous gain-of-function research on coronaviruses, demonstrably lying to the NIH about the scope of this research, and then misleading the public on the COVID-19 lab-origin theory while failing to disclose their financial and professional ties to the lab in question [0][1][2].
The original comment you made, which was downvoted (but I upvoted it) conveyed to me an emotion of angry refutation. (Which I'm not judging, so don't start about tone policing)
You seemed to be saying "don't pay attention to his quote because he's clueless about climate change", but the quote is about people who aren't, say, climate scientists, being gullible about the topic.
It's a self-referential quote. If you refuted it, does that mean that non climate scientists can judge climate science?
It seemed like an amusing paradox, and I guess I'm curious about your motivation.
My motivation is that I'm a scientist (before and after a long Silicon Valley career) who is annoyed by writers who write a bunch of novels where the villains are scientists corrupted by their lust for grant money. It's a trite complaint that is used in politics to claim 100% of scientists are biased, and plays a big role in the pushback against environmental and climate science.
Not op, but I think that the post that you replied to has a valid point.
Sure, the quote in itself is great, but the post that you replied to is pointing out that the person who made that quote is himself doing the same thing as those journalists that he is criticizing.
When you say you are no longer highly 'pro science', would you say that you would rather people take advice from scientists, or from people with a background of "Ultimate Fighting Championship color commentator, comedian, actor, and former television presenter"? Because that's the level we're at at the moment.
The current climate we're in means that consensus science from 50+ years of research and medical practice around the world is being questioned. Not some BS CS paper on a performance improvement that will be read and forgotten, or a claim that some new medication/fruit/treatment is a miracle cure that others can't replicate.
We're in a world where people think that Bill Gates is inserting microchips into people and that Covid is a hoax by the world government and no one has died. Academia may be broken - but don't lose faith in science.
>When you say you are no longer highly 'pro science', would you say that you would rather people take advice from scientists, or from people with a background of "Ultimate Fighting Championship color commentator, comedian, actor, and former television presenter"?
I would prefer that the general public take advice from scientific researchers with a grain of salt, with an understanding that the system is far from perfect and much of what is published either contains methodological flaws that lead to false conclusions or cannot be reproduced at all.
I also think a little bit of common sense would go a long way.
I too have worked in academia, and in partnership with academics over 3 decades, so I've seen how the sausage is made. I understand and recognise some of your points. Still, we can and should be pro-science, while recognising that science in academia is extremely political, and increasingly bending to marketing pressures and industry influence.
A big problem, as I see it, is the unwillingness of academia to engage in debate with the public. While academics nowadays are supported and encouraged to engage in "outreach", the nature of that is almost always broadcast mode.
In the age of social media, science needs to learn how to communicate and engage with non-tenured people asking good, evidence-based questions. Sure, they get bombarded with abuse and crackpots, but that should not mean ignoring absolutely everybody who approaches them with reason and civility, as so often happens.
Trust is breaking down in society. Is there any wonder, when trust grows from interaction?
> Sure, they get bombarded with abuse and crackpots, but that should not mean ignoring absolutely everybody who approaches them with reason and civility, as so often happens.
I think it is a really hard problem. Figuring out if someone is arguing in good faith is hard, and engaging with someone arguing in bad faith is really draining. Beyond the difficulty, people worry that engaging with some people will draw even more responses, many of them in bad faith. And responding only partially often leads to "yeah, you are responding to them, but what about X".
I think in the end, wading through all the abuse is just so painful that many people decide to ignore input. And doing anything that might cause more abuse to flood in is reflexively avoided, since that abuse actually hurts.
Solving this is going to require specialization. Because being able to wade through the abuse is not something everyone can do. It will also require proper incentives to actually do this, because the people who can handle the abuse aren't going to do so out of charity.
Isn't the simple solution to just not engage with comments on social media? Honestly, that's where the problem is, aka trolls and partisan bots. Have face-to-face conversations between scientists and skilled interviewers: the scientist supplies the answers, and the interviewer generates the questions and keeps the conversation moving along and interesting for the viewer. Publish the content on social media, since that is really the only thing it is good for, but don't read the comments. We have specialists in this area already; they're called journalists. All they need to do is make a phone call, set up the interview and record it.
That all sounds disturbingly like you kinda are okay with Joe Rogan and Bret Weinstein spreading nonsense about how vaccines are killing people and how ivermectin is better than vaccination.
Joe Rogan doesn't spread anything. He interviews experts in the field and listens to what they have to say. Currently everyone is angry because he interviewed Robert Malone, the inventor of mRNA vaccines. You can read about Robert Malone's contributions here:
"Dr. Robert Malone is the inventor of the nine original mRNA vaccine patents, which were originally filed in 1989 (including both the idea of mRNA vaccines and the original proof of principle experiments) and RNA transfection. Dr. Malone, has close to 100 peer-reviewed publications which have been cited over 12,000 times. Since January 2020, Dr. Malone has been leading a large team focused on clinical research design, drug development, computer modeling and mechanisms of action of repurposed drugs for the treatment of COVID-19. Dr. Malone is the Medical Director of The Unity Project, a group of 300 organizations across the US standing against mandated COVID vaccines for children. He is also the President of the Global Covid Summit, an organization of over 16,000 doctors and scientists committed to speaking truth to power about COVID pandemic research and treatment."
Do you really think it is unreasonable to listen to what someone with this kind of a background has to say about mRNA vaccines?
Malone worked on that stuff almost 30 years ago, and hasn't been involved in any way, shape, or form with the vaccines that exist today. Moderna's platform has a decade of work behind it and he had nothing to do with it, and the studies that show that it's safe and effective have nothing to do with him, nor does he have any particular knowledge about them that a smart grad student in a lab wouldn't have.
So no, I don't think his random commentary is particularly relevant or reasonable to listen to, any more than Jaron Lanier's haterade @ the tech industry is relevant just because he was working on shitty VR in 1990. Some people are dinosaurs and just need to be ignored, it's not unusual for people to grow to hate the things that they worked on when they were young and get increasingly unhinged arguing against them.
I don't think Rogan should be censored, for sure, 100% and without any reservations - free speech should be absolute, I'm old-school in that respect. But I absolutely think anyone paying attention to this crap should be socially scorned as a dumbass, and TBH I can't get my chuff up too far over people trying to convince Spotify to boot Rogan over this, free markets are also sacrosanct and that's what this is (even if I also would vastly prefer that Spotify holds to a hardline free-speech ethos and leaves him up).
Joe Rogan is a both-sides-ist. He shows people who speak the truth, and people who speak lies, and then claims to be impartial because he showed both sides.
Imagine this in any other field. It is not exactly reasonable to separately bring both a Nazi and a non-Nazi and claim to be impartial. In that scenario, most likely the normal person would sound relatively ambivalent, while the Nazi would be extremely agitated and emphatic that urgent action must be taken.
Several things really struck me about this comment:
> consensus science from 50+ years of research and medical practice [...] being questioned
This is precisely what scientists should have been doing! I'm afraid having your work done for you by unqualified amateurs is a kind of punishment for skimping on it for too long.
> a background of "Ultimate Fighting Championship color commentator, comedian, actor, and former television presenter"
This is essentially a classist argument: someone who tells dirty jokes on Netflix couldn't possibly, by the essence of their being, decide for themselves which scientist to talk to. What's particularly striking is that this comes after the author seemingly accepts the parent's premise that academia is broken. What I'm hearing is "Yeah, we haven't earned your trust, but at least we're not those people". This is not a winning argument, and certainly not a scientific one.
> We're in a world where people think that Bill Gates is inserting microchips into people and that Covid is a hoax by the world government and no one has died
This too seems heavily influenced by class. When people believe "Bill Gates is inserting microchips", they are factually wrong but tentatively right. They are powerless in politics, while oligarchs, including foreign ones, have outsized influence. Wild conspiracies are how these feelings find expression in those who have no other way of expressing them. By taking everything literally ("no, Bill Gates is not literally inserting microchips"), their enemies can avoid dealing with the issues a more charitable interpretation would be getting at.
> don't lose faith in science
To the extent that science is used as the underwriter of various political projects, I think people are perfectly fine losing faith in it. 5 decades of "belief" in science hasn't done much good, so what's the point? On the other hand, I'm not worried that people will stop using material science... By and large mistrust is very well placed.
> By taking everything literally ("no, Bill Gates is not literally inserting microchips"), their enemies can avoid dealing with the issues a more charitable interpretation would be getting at.
This is so on the mark - too often we forget to focus on what someone is _really_ trying to communicate, and take their words at "face value" - which is not communication at all.
> This is precisely what scientists should have been doing! I'm afraid having your work done for you by unqualified amateurs is a kind of punishment for skimping on it for too long.
How else do you think scientists reached consensus, exactly? Maybe the problem lies with the unqualified amateurs, who are, well, unqualified to do that?
Scientific consensus is reached gradually over the course of decades, after a theory has stood the test of many, many challenges. No scientific theory is absolute and final, and challenges are welcome, but they should be scientific, i.e. based on real data and statistically grounded analysis. Yes, it's hard to change consensus, not only in science but in any kind of community. This does not mean everybody in that community is corrupt. "Extraordinary claims require extraordinary evidence". If changing a consensus were easy, it would not be a real consensus.
>> a background of "Ultimate Fighting Championship color commentator, comedian, actor, and former television presenter"
>
>This is essentially a classist argument: someone who tells dirty jokes on Netflix couldn't possibly, by the essence of their being, decide for themselves which scientist to talk to.
No it's not. Acquiring all the background knowledge and staying up to date with all publications in a given scientific field is very difficult and time-consuming. Expertise does exist, and takes years to build. Somebody who tells dirty jokes on Netflix likely does not have the time or energy to do that. Maybe, but quite unlikely.
> To the extent that science is used as the underwriter of various political projects, I think people are perfectly fine losing faith in it.
What is the alternative? If no policy should be based on science, then what? Tea leaves? Crystal balls? Shamans? Prayers? Goat sacrifices?
> 5 decades of "belief" in science hasn't done much good.
You really need to elaborate here.
Please, please, please. Do not confuse science (the process of discovery) with scientific institutions (the groups of humans with all their defects).
> What is the alternative? If no policy should be based on science, then what? Tea leaves? Crystal balls? Shamans? Prayers? Goat sacrifices?
I'm afraid you're lacking imagination. Also, the irony of these being historical examples rather than made up seems to be lost on you. Humans are strange.
On a more serious note, if you were interested in what a modern society looks like that is no longer guided by science, you could study Russia. It seems a cynical blend of religion, nationalism, and imperialism is what it is. I'm not advocating for this, I'm describing what it looks like.
>> 5 decades of "belief" in science hasn't done much good.
> You really need to elaborate here.
I specifically prefaced it with "as underwriter of political projects". Over that time, the #1 influence on policy has been economics, and besides covid, all of the most pressing issues today are economic: stagnation, deindustrialization, inequality, and inflation. People are asking: if policy over that period was built on such a solid scientific foundation, how come we've ended up here?
>> consensus science from 50+ years of research and medical practice
>This is precisely what scientists should have been doing!
My reference was to the years since we've virtually eliminated the deaths from polio, TB, and measles and other diseases in countries that have used vaccines against them. There have been studies that have shown that some vaccines cause side effects in some people, but none have shown to be comparable to the millions of lives that have been saved from debilitating diseases.
> 5 decades of "belief" in science hasn't done much good.
Says the person writing a comment on HN through a network of hardware and software unimaginable 50 years ago, made possible at least in part by scientists and academic researchers making real breakthroughs. My comments were to the poster who thought scientific research was a polished turd because of his experience in one part of academia.
Just this weekend I was reading that more than 60% of the research into cholesterol is funded by the egg industry. Research funded by industry overwhelmingly presents things in a positive light for industry, whether the evidence supports that or not.
This kind of example is common. It tells me it's worth questioning research and looking deeper at whatever is released. When someone questions research, look at the merits of what they say.
Exactly. The guy who is an "Ultimate Fighting Championship color commentator, comedian, actor, and former television presenter" doesn't have a conflict of interest, which makes him less biased and more believable.
BIG conflict of interest… if someone goes on his show stating something like “brushing actually destroys your teeth, you should take X supplement which I developed and read my book” he gets money and viewers interested in this “no brushing” thing. If the conclusion is that you should brush and use toothpaste he doesn’t have a podcast.
He’s in the industry of going against “mainstream [ie scientific] knowledge”… which provides him and endless stream of salesmen who cherry picked scientific papers that support their schtick… and a lot of interested listeners who get an excuse for everything they don’t feel like doing (“oh, this guy on Rogan said brushing destroys your teeth, so I guess I’m better off leaving them alone”)
> BIG conflict of interest… if someone goes on his show stating something like “brushing actually destroys your teeth, you should take X supplement which I developed and read my book” he gets money and viewers interested in this “no brushing” thing. If the conclusion is that you should brush and use toothpaste he doesn’t have a podcast.
This seems to be equally valid for basically any media (using brush and toothpaste is not news). Would you say that all media is in BIG conflict of interest? Or how is Joe Rogan fundamentally different?
It's different because that's the niche he's focusing on. He's not the only one... Oprah (to a less sensationalistic extent)? Gwyneth Paltrow? Malcolm Gladwell also had huge success with books that cherry-pick scientific publications to "prove" things that go against conventional notions in various fields.
We know that traditional media has analogous biases towards celebrity scandals, and towards FUD and polarization in politics, and that bottom feeder broadcasters with no ethics thrive on those.
... He does it with lifestyle / well-being specifically, inviting people that peddle diets, supplements, etc.
That's not fair to say. The guy who is "Ultimate Fighting Championship color commentator, comedian, actor, and former television presenter" has a massive conflict of interest, namely between generating viewership and presenting factual information. Everything he presents is a choice on a gradient between those two.
These snake oil salesmen have quickly cottoned on to the fact that they can take disenfranchised and usually uneducated groups and regurgitate their own opinions back at them with an air of authority, thus gaining an almost fanatically loyal following.
It's the new age televangelists, but instead of preaching salvation for tithe, they preach actually dangerous drivel that gets people killed. It feels like a low point in human evolution, where our tools enable sociopaths to mislead clueless masses on an industrial scale.
And do you know what the worst aspect of this is? That we can't ban them in good faith, because banning information is a slippery slope, a one-way street we mustn't take a single step on. The only way out of this quagmire is education, but it's a long slog that will pay off in decades' time, and only if we make a great effort now. It's literally the tree we have to plant even though our current generation won't get to enjoy its shade.
Less biased, I agree with. More believable? I'd need to look at the merits and details. Sometimes he says things that I could argue are true. Other times I could make an argument that he shared complete hogwash.
Being less biased doesn’t make one more true. Both can be wrong
I'm a fan of Joe Rogan, but the "doesn't have a conflict of interest" argument doesn't seem correct to me. The point of Joe's podcast is that his biases and motivations are transparent, not that they don't exist. Joe is interested, like many people, in contrarians. It's a heuristic that leads him to interesting conversations, and those kinds of interesting conversations are why the podcast is so popular. Oftentimes people who seem one-dimensional in media coverage are shown to be much more than meets the eye when placed in a long-form non-confrontational setting. There is no guarantee of factualness or good faith in any conversation on the podcast, and in many cases he will bring in verifiably crazy people just to see what they're really like -- these are some of the most popular episodes, by the way.
Even if he isn't in a conflict of interest, that doesn't mean he has any kind of qualification on any scientific matter. Scientific debate should continue in the scientific community, not on podcasts and social media.
1) That is not the choice to be made. I don't think there are more than a small handful of scientists in the debate. Scientists tend to be much more circumspect about things, and don't come to consensus quickly enough to agree on the measures that were rammed through.
2) The scientists who are involved are unusually politically motivated. Most of the scientists I can name off the top of my head are in government (the likes of Fauci in the States or Kerry Chant in Australia). These people are in a position of arguing that their opinion should be given emergency power to decide ... most aspects of the social sphere. This is an excellent time to be looking beyond credentialism to actual arguments and trade-offs. Without the "noble lies", please; let's make policy decisions based on facts.
3) I still haven't found anyone who can argue why the vaccine mandates aren't human rights abuses. We're talking the really basic stuff like right to work, right to assemble, right to religion, etc. I don't care who we listen to as long as they are arguing on the side of those basic freedoms. If Joe Rogan is arguing for those things, I'd rather people listened to him. Those concepts have a better track record than technocrats. We can worry about run-of-the-mill stupidity after we've dealt with the real risks here.
So is the choice between “covid is a hoax and Bill Gates's microchips” and “follow the official guideline whatever it says”? What happens to opinions in the middle, like “vaccinate people at risk, not those who already had covid”? Or “covid may well have leaked from a lab”? In which of those categories do they fall?
The former is universally an excuse to not get vaccinated. If we had very reliable vaccines it would be reprehensible for so many people to refuse to do their part to immediately end the pandemic, but given current data it is not so clear.
The latter is pretty much moot.
You do have to understand that when people say "I got banned because I said COVID may have leaked from a lab" they don't actually mean "I got banned because I said COVID may have leaked from a lab". It's significantly more likely that they got banned because they said COVID definitely leaked from a lab funded personally by Dr Anthony Fauci to create bio-weapons against America to aid the spread of communism with Chinese characteristics because he is secretly an illegal immigrant from Kenya.
>> When you say you are no longer highly 'pro science', would you say that you would rather people take advice from scientists, or from people with a background of "Ultimate Fighting Championship color commentator, comedian, actor, and former television presenter"? Because that's the level we're at at the moment.
That's a strawman. He's interviewing doctors, scientists etc. He's not putting forward his own views, apart from when he recounts his own covid experience.
It's like saying the left gets all their news from John Oliver.
> We're in a world where people think that Bill Gates is inserting microchips into people and that Covid is a hoax by the world government and no one has died. Academia may be broken - but don't lose faith in science.
Human bar code tattoos financed by the Gates Foundation. Opponents are discredited because they get an insignificant detail wrong.
We see the same thing in the reporting of the Canadian trucker protests. There are 10,000+ peacefully protesting people, the media find three with a swastika flag (which could even be a literal false flag operation), and reports "right wing protests".
They also report the "desecration of an unknown soldier's memorial", which was a single woman basically walking on the monument. Compare that with "mostly peaceful" BLM protests, where whole cities were burning and monuments were actually destroyed.
> The current climate we're in means that consensus science from 50+ years of research and medical practice around the world is being questioned.
(A) In some aspects, 50+ years of accepted medical science was wrong. "Droplet dogma" [1,2] was wrong. Now we know that respiratory viruses spread by airborne aerosols [3].
The epistemological tradition of modern medical science seems to be based almost solely on randomized controlled trials (RCTs). They think everything should be studied and verified the same way that new drugs are studied. We use a lot of physical safety measures, like seat belts and airbags in cars and hard hats on construction sites, because they make physical sense and their protective properties have been measured in engineering laboratory studies - like using crash test dummies when crash-testing cars. Engineering and the physical sciences have methods to achieve and test knowledge that are not an RCT with real people.
Now the topical topic is face masks. Engineers know respirator masks can filter over 95% of fine particles, but a sizeable part of medical science still doesn't believe in face masks, due to the lack of decisive studies using an RCT-based setup.
(B) But then we also have a surprisingly strong anti-vaccine movement. In this case, the consensus of medical science is correct, and vaccines work.
So we have A, where medical science was wrong and a bit stubborn about correcting itself. Science as a whole was not wrong: engineers and physicists worked out the details of aerosol spread of the virus and the filtering properties of face masks. And then we have B, a genuine anti-science movement.
But what to do? If we didn't question the established dogma as used by the authorities, then we would still believe in the droplet dogma, and we would still think face masks are useless. But if we allow the questioning of established dogma when the dogma is indeed wrong, then how can the general public know when old dogmas are being questioned for good scientific reasons (A), and when not (B)?
We're also living in the world where certain governments are mandating vaccines that pose 50% higher excess risk of myocarditis compared to an infection[1] for those aged under 40, according to a peer-reviewed, UK population-level study.
Or the shutdown of discussion around the fact that some health authorities, like Sweden's, have (IMO correctly) judged that there is no clear benefit to vaccinating kids under 12 against COVID [2].
The issue I see is less the fringe 'microchips' stuff and more the complete shutdown of discussion around complex topics that are not binary.
You should also take all the benefits and complications into account when looking at vaccines. The complication rate of infection for those under 40 is far from zero and assumed to be significantly higher than the complication rate of the vaccines. Discussion is good but your argument sounds less like a discussion and more like a statement.
Your first study does not show what you claim it does, or at least it is a gross simplification.
These statistics are at the 10 per million scale, i.e. 0.001%, which is smaller than the fatality rates for unvaccinated under-40s.
That is to say that there is a selection bias for people who did not die, and this is a significant oversight in your interpretation.
The argument he/she's making is that specific findings are being suppressed, so nobody has a full and accurate picture. Thus you cannot refute it with a statistical argument that assumes you do have the full picture.
That is the argument you are making, and I struggle to see in what world being published in Nature is "being suppressed".
They make the claim "vaccines that pose 50% higher excess risk of myocarditis compared to an infection", and this is a misleading claim.
The risk to an individual of taking the vaccine is more sensibly measured against what happens to somebody who isn't vaccinated, not what happens to somebody who isn't vaccinated and also didn't die.
This is like comparing injuries in people who survive jumping out of a plane with and without a parachute. How important is it if people without a parachute break their arms less often?
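To make the conditioning-on-survival point concrete, here is a toy calculation in Python. Every number below is invented purely for illustration (roughly at the tens-per-million scale discussed above); it only shows how excluding deaths can flip which option looks riskier:

    # All numbers invented for illustration; rates are per million people.
    vaccinated_myocarditis = 15   # myocarditis after vaccination
    vaccinated_deaths      = 0    # hypothetical: no vaccine deaths

    infected_myocarditis   = 10   # myocarditis among infection *survivors*
    infected_deaths        = 300  # hypothetical deaths among unvaccinated

    # Survivor-conditioned comparison: the vaccine looks 50% worse.
    print(vaccinated_myocarditis / infected_myocarditis)    # 1.5

    # Comparison including deaths: infection looks ~20x worse overall.
    print((infected_myocarditis + infected_deaths) /
          (vaccinated_myocarditis + vaccinated_deaths))     # ~20.7

Whether real numbers look anything like this is exactly the empirical question; the point is only that the choice of denominator drives the headline ratio.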
Their study isn't an attempt to answer the holistic question of whether vaccines are saving lives or ending them in aggregate, and they never claimed it was, so I don't see what's misleading about it. The claim dannyw is making is about the studies and discussion that aren't being published in Nature, or at all.
Your counter-claim is that this alternative question is what they should have been discussing, and if they had been, it'd be misleading to focus on the risk of myocarditis alone. Which is correct, but then they'd also have to take into account injuries and deaths from other non-myocarditis vaccine side effects, the costs of medical care not given due to the spending of resources on vaccines instead, QALYs and so on. But the sort of institutions that fund such research don't want to know about vaccine downsides, so don't fund any research into it, and moreover expend considerable effort to suppress whatever little research does get done.
> We're also living in the world where certain governments are mandating vaccines that pose 50% higher excess risk of myocarditis compared to an infection[1].
If everybody else takes a vaccine it is almost always far safer for you not to take it. Vaccines help society by stopping illnesses spreading, they just need enough people to collectively accept the personal risk for the good of everyone else - the same was true for Polio, TB, measles.
Sterilizing immunity is when you become immune to getting infected by a virus at all: antibodies neutralize the virus particles before they can do any harm or meaningfully replicate. All current (and probably future) SARS-CoV-2 vaccines don't do this. Through a combination of several vaccinations and infections, the hope is that you get close to sterilizing immunity.
Funny thing about that is he always says that he is the only one speaking up because everyone else has incentives (gets paid) to shut up. Imagine that: a pandemic breaks out, you invent the cure and get nothing in return. I would want to burn down the world.
It says he discovered a way to put mRNA into human cells. We don't even know that, really - we know he ran an experiment doing so.
It does not say he knows anything about the things he talked about on JRE: hydroxychloroquine or ivermectin, or anything about the specific mRNA being delivered to fight COVID (unless you are suggesting that ALL mRNA insertions kill people), or anything about immunity at all, or anything about Israel/Palestine, or anything about epidemics, or anything about monoclonal antibodies, or anything about mass formation psychosis (except to the extent he is trying to make it happen).
I have a proposal to make things easier: why don't you and your friends give us a list of people allowed to talk about ivermectin, or mRNA, or Israel/Palestine, so we can more easily put them all on one channel that will give us the Truth and make it easier to protect us from all of the other articles, podcasts, etc.? I've seen some regimes do that and it works much better.
Is this comment satire? Robert Malone is the person Joe Rogan interviewed, and that everyone is up in arms about. He is literally the first name that appears in the first sentence of that article you linked.
"We're in a world were people think that Bill Gates is inserting microchips into people"
Well, as far as I remember, the root of that conspiracy has a true base: Bill Gates really did talk about the possibility of a health chip injected into people, as a vaccination certificate. So no chip coming with the vaccination, but a digital health certificate under the skin. But now it is framed as if he never even spoke about chipping people, and I cannot even find the original quote on reddit anymore.
"The current climate we're in means that consensus science from 50+ years of research and medical practice around the world is being questioned."
And too many things are getting thrown into the same bucket. The mRNA vaccinations are not 50 years old, but brand new. I can very much understand being sceptical of a cutting-edge proprietary technology that is to be injected into my system. And I cannot verify the build process.
I have to trust government inspectors to do that.
And as a matter of fact, one of the scientists involved with the original mRNA technology, Dr. Malone, advises strongly against it.
And the process of gathering the safety data is not perfect either, as a whistleblower showed:
Now sure - having read a bit about Malone, it seems he left the realm of science and entered crackpot territory some time ago, and the data scandal is not as bad as it sounds - but these are real events nonetheless.
Also, despite it being repeated as a mantra that the vaccines are totally safe, there are strong indications that people died because of the vaccinations. Not because the vaccines are a poison developed by shape-shifting reptiles, but because some production facilities messed up and some batches got contaminated with various rubbish, which can happen in the real world when people are in a hurry.
So there is a chance that a vaccination does not help but harms you as an individual. This chance might be very, very small, and statistically speaking it probably still makes sense to get the shot. But the possibility exists. And it is not really communicated this way, probably out of fear that people who do not know statistics will misinterpret it.
But this is what fuels conspiracy theories. I mean, the flat earthers are out of the realm of reason anyway - but not everyone who is sceptical of the vaccination and the official data about them, is a crackpot tinfoil hat nuthead. Or am I?
Well, I know that I got angry being told by my doctor that my vaccination (AstraZeneca) was totally safe - and then learning about the contaminations by chance the next day. Something the doctor should have known. So this was bullshitting me, which I do not like. Now I think I do understand the reason for this bullshitting: the negative placebo (nocebo) effect. Meaning if I believed the vaccination was dangerous for me, it actually might be. I suspect many of the reported heart attacks as side effects are showing just that: people got so scared and stressed that they developed real symptoms, while people who stay calm about it have a better chance of not getting bad side effects.
So this is the problem as I see it. There are people who can handle real data and people who cannot. But we are all treated like the same idiots. (And I believe people will never learn to deal with real data when they are only ever given comfortable lies.)
"So shut up and take the shot. The one we prepared for you."
I understand people getting upset with that, as I do not like being treated as a sheep either. I am not a medical researcher, but I can read papers. And I would like to choose for myself whether I trust Moderna, Biontech, AstraZeneca, Sputnik or Sinopharm. But I do not even get that choice. For some reason BBIBP-CorV, the one from Sinopharm, which uses a very safe old-school vaccination principle of deactivating real virus, is not approved in the EU, despite being in use worldwide and approved by the WHO. People vaccinated with it are recognized as unvaccinated here. Why is that? Is there some data which is not shared? It is hard to tell for anyone not deeply involved with it.
I don't think many people outside of academia truly understand how much scientific research has degraded over the years.
There are an extremely small number of fields in which the overall quality is still quite high (mathematics, etc.) but overwhelmingly the social sciences, medical science, etc. are wastelands of p-hacked, low-N, biased, poorly designed studies that can't be replicated (not to mention the outright frauds and absolutely rampant plagiarism).
Every intelligent person should be deeply, deeply skeptical of papers published in particular fields over the last ~20 years.
> I don't think many people outside of academia truly understand how much scientific research has degraded over the years.
150 years ago we had phlogiston alongside Maxwell on electromagnetism. Science is always a mixture of more and less wrong stuff. It's people doing work, for good reasons and bad reasons. Some of them are crazy, some are corrupt, and many are doing their best in good faith.
Unless you think we had a specially good period in, say, the twentieth century that we've retreated from. It did seem to be a period of acceleration.
But there's at least some pretty amazing biotech going on this century.
I think most every activity sector has the phenomenon that 80-90% of the people/companies in it are a pointless waste of time and money. It's the price we pay for the 10-20% treasure.
> 150 years ago we had phlogiston alongside Maxwell on electromagnetism.
First, phlogiston theory was considered obsolete by the late 1700s, so you're off by the better part of a century.
Second, that's not even remotely comparable. While phlogiston theory is incorrect, at the time it was first proposed it seemed as good a guess as any other, and remained a viable explanation for how combustion worked until experiments proved it wrong. What's happening today is that respected researchers at reputable institutions publish results that they know are wrong or statistically meaningless, in order to game the academic system towards awarding them greater respect and influence. The problem is fraud, not ignorance.
What's the basis to believe that this is worse now than other times in history, or on average?
I strongly suspect there's a survival bias, where the past fraudulent and otherwise incorrect stuff is forgotten in favor of the great stuff. So it always feels like today is the worst time ever.
> What's the basis to believe that this is worse now than other times in history, or on average?
We produce more science now than ever before. If you have 10 scientists and 9 of them produce rubbish it won't take you long to read a paper from the one who doesn't.
If you have 10,000,000 and 9,000,000 produce rubbish, you can spend a thousand lifetimes reading nothing but rubbish.
Social science != Science. (OK, I'm overreacting, but come on - the field seems barely literate in statistical methods last I looked, i.e. mean != average for all cases.)
Medical science has a massive reproducibility problem largely brought on by the demands and pressures of industry funding.
Chemistry is still allowing us to make better turbine blades for jet engines and Physics is still explaining the weird world of quantum mechanics.
Just because some fields can't keep their facts and figures straight, we shouldn't tar the whole of "scientific research" with the same brush.
With the dizzying speed, complexity and volume of reference materials, the single word "degraded" does not have enough descriptive power to even start to be interesting. It's just so much more of everything, so much faster, plus all the factors named here.
This comment would have been more apt a decade ago, but we've already seen a substantial correction and changes to the 'rules' in response to the replication crisis. There's an explosion in the amount of shite published in garbage-tier predatory journals, but most of that is noise that isn't making it into any decision-making.
There are a ton of problems with how science is done and most of it comes down to the same root cause: greed. You have corporations buying off scientists (if not to rig experiments/falsify results, then to bury unfavorable findings), journals willing to publish any garbage with little (if any) meaningful review, then media companies willing to turn that junk science into a click-bait friendly press release for a company or industry.
The corruption of science into advertising and science by press conference is a huge problem, but it's not the fault of science itself. It's the institutions we've built around science that are responsible. The way we choose to handle funding, the universities, and the journals: these are all systems that were created, and they are all systems that can be replaced or reformed. The underlying framework for scientific observation and testing is still solid and remains the best way to further our understanding of the world we live in, but we need accountability and regulations in place to keep the output high quality.
It's the same issue we have with medicine. Not enough accountability and oversight has allowed for things like doctors taking kickbacks from pharmaceutical companies (the opioid crisis was a good example, but it's been going on for ages) and people like Stella Immanuel, who can tell her patients their illness is caused by demon sperm and alien DNA but still gets to keep her license to practice medicine.
Without regulation and oversight every system is vulnerable to corruption and failure. It doesn't mean you should throw away the system, it means we aren't doing our job to keep it functioning.
> You have corporations buying off scientists (if not to rig experiments/falsify results, then to bury unfavorable findings), journals willing to publish any garbage with little (if any) meaningful review, then media companies willing to turn that junk science into a click-bait friendly press release for a company or industry.
What reason does a corporation have to pay for science that they know will not work? That's self-defeating. At best it's only advantageous in the very short term, before you try to actually sell a product that doesn't work. (One more reason these absurd pre-revenue SPACs are horrendous.)
Hypothetically: Let’s say you manufacture Volkswagens, and your small engine diesels include a defeat device that allows your cars to pass emissions tests by enabling pollution controls only during emission tests.
When the world learns of your treachery, you sponsor scientific research in which monkeys are exposed to your modern VW diesel tail pipe emissions and also model year 1990s Ford diesel fumes, to show that you’re really pretty harmless compared to old pickup trucks. Your research is trash, the conclusions meaningless, and scientists cash your checks.
Hypothetically: you manufacture cigarettes..
Hypothetically: your fracking is injecting heavy metals into my groundwater..
> What reason does a corporation have to pay for science that they know will not work?
Profit. The only reason a company does anything. You see it when companies pay for research so that they can get "X product may reduce risk of cancer" in the headlines, or when they want to put "Scientifically proven to Y" on their product labels. Even if it takes funding 100 studies to get the results they're looking for they can just bury the results of the first 99 that contradict their soundbite or ad copy.
The tobacco industry paid off scientists to lie about the cancer risks of their products so that they could continue to profit from killing their customers. DuPont did the same thing. The company knew about the dangers of PFAS for decades, but went around funding research on it and pulling that funding the minute results showed their product was harmful. They also paid off a scientist whose job was peer review to protect their interests while hiding his ties to the company.
There are a ton of ways lies disguised as actual science can make money for those with no morals and corporations don't have morals, just shareholders.
With the current Covid pandemic, I could be wrong, but it seems most studies are looking at the effectiveness of vaccines and little to nothing at the effectiveness of natural immunity. I say this as someone who was one of the first to get vaccinated. Not that I am saying the pharmaceutical companies sponsored the studies. So to answer your question, sometimes what is not funded is probably more telling.
It seems even more like there are groups of people who are dead-set on certain catchphrases (like "natural immunity" or it used to be "herd immunity") no matter what facts they are presented with.
A lot of the antivax folks cling to the idea of natural immunity being better somehow, but who in their right mind would choose immunity from getting infected with a virus that can make you very very sick (or dead) and cause you to infect many others around you if they could otherwise get immunity from a free vaccine with none of those problems?
I am in South Africa. We initially struggled to get the vaccine and first phase was literally a clinical trial for health workers. We eventually did get stock but it seems by then a significant number of the population got Covid. Omicron variant doesn't seem to have affected us as much and surveys have shown that up to 80% South Africans have natural immunity from having had Covid. Yes there are some papers on the topic but government policy does not feature anything related to natural immunity. It wasn't a case of get intentionally infected rather it was a case of getting vaccines later than other nations.
It's certainly not good that so many people were forced to roll the dice initially because of lack of access to a vaccine that already exists. As new waves of variants get spread around data has been being collected so we can learn more about the immunity given by both vaccines and prior infection. Your comment mentioned a lack of research being done, but said nothing about government policy. How do you think government policy should have changed in relation to natural immunity?
From what I've seen a prior infection may give stronger protection than vaccines which is great for people who caught the virus and survived without long term health issues, but that's no comfort for the people who didn't. I hope that access to vaccines has improved in South Africa since whatever degree of protection we get from either an infection or a vaccine doesn't last very long. I'll be trying to get my 4th shot in a couple months.
> How do you think government policy should have changed in relation to natural immunity?
This is my own opinion based on my observations in South Africa. Yes, it is unfortunate we could not get access to the vaccine in the quantities and at the time we would have liked, but the lesson is that we need to improve our ability to manufacture vaccines. There are some promising initiatives in this regard. I digress, though. South Africa keeps an eye on weekly deaths[1] and can then work out excess deaths. The excess deaths during the recent Omicron wave, which peaked at the end of December, were significantly lower than the deaths during previous waves. Government continues to encourage vaccination, but the restrictions to reduce the spread of Covid are at the lowest we have had. Even during the peak of the Omicron wave over New Year, restrictions were being lifted [2]. So without coming out and saying it, it looks to me like the SA government acknowledges that natural immunity seems to be helping keep hospitalisations and deaths low. It doesn't say so explicitly, likely because most of the research in SA tends to place less emphasis on natural immunity and focuses on the vaccine. So all communication from government continues to stress the need to vaccinate, yet restrictions continue to fall. It could be that I have not read enough, but government's attitude has been welcomed.
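For anyone unfamiliar with the excess-deaths idea mentioned above, here is a minimal sketch in Python with entirely made-up weekly figures: estimate a baseline from the same calendar week in pre-pandemic years, then subtract it from the observed deaths.

    # Made-up numbers: weekly registered deaths for the same calendar
    # week in five pre-pandemic years, used to estimate a baseline.
    baseline_years = [9400, 9600, 9550, 9700, 9500]
    expected = sum(baseline_years) / len(baseline_years)

    observed = 11200  # hypothetical deaths in that week during a wave

    excess = observed - expected
    print(f"expected ~{expected:.0f}, observed {observed}, excess ~{excess:.0f}")

Real analyses (the SAMRC weekly-deaths reports, for instance) fit more careful baseline models, but comparing waves works on the same principle.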
It's not only greed, but corporatism. People in institutions are entrenched and will defend their own interests even when it's immoral and stupid. And at the end of the day most will have zero doubts about their integrity and rightness, because the human mind is amazing and allows for a large degree of cognitive dissonance.
The sad thing is that there was always money to be made by having a greater understanding of our world and developing new technologies. Investing in research pays off very well, but it requires long term thinking. If you only care about next quarter's profits and growth you aren't going to make that kind of investment. Not when you can manipulate science today and get a lot of money right now. 3M and DuPont poisoned the world and even their own children with PFAS for decades after they knew it was harmful because they couldn't resist the money they'd make doing it. For a lot of people, no amount of wealth is ever enough.
Assuming that that paper is not false, it highlights lots of areas/reasons why things can go wrong.
Nonetheless, flawed as it is, the 'scientific method' is the best we've got. "How do you get people to discover reliable knowledge without getting sidetracked by bias, bribes, career incentives, corporations or politics" is actually a very hard problem to solve. The cobbled-together bunch of institutions, practices and traditions that we call 'the scientific method' needs to be understood for what it is: a patchy, error-prone but just about serviceable method for fighting our own biases and shortcomings.
Always good to look for ways to improve it I guess but overly cynical or overly optimistic takes on it don't help much. We need to have a realistic view of it.
The difference versus academia of old is that science used to be the hobby of rich gentlemen who were under no financial pressure to pay a mortgage and feed a family. Nowadays, I have no doubt everyone is in their field because they love it and want to further it, but if you have to choose between unemployment and taking on projects you didn't choose, most can live with the latter.
So the solution is to somehow decouple it again. Give universities a blank cheque to employ scientists to work on whatever, for science's sake, not for business. Sure, that's obviously open to exploitation in a new way, but we only need one major discovery for the whole thing to be worth it, which we can't get with small incremental research.
> Give universities a blank cheque to employ scientists to work on whatever, for science's sake, not for business.
Personally, I favor the opposite—stop awarding universities the "indirect costs" part of grant awards unless they agree to cap the hiring of grant-seeking investigators. The number of grant applicants needs to be balanced with the supply of public funds, and I don't think the public is willing to pay much more than they already are. And I say this as someone who is currently grant-funded.
I don't know which field you studied, but I think that in physics it's not quite that bad. There might be some research published that's just wrong or fraudulent, but not that much. I think it's because in physics, outside maybe high energy particle physics, it's usually feasible to check other people's experiments. In theoretical physics it's certainly common to work through other people's results before extending them.
However, I would say there's a lot of pointless research. Lots of graduate student and postdoc working hours are spent to produce results that no-one could possibly think are interesting or useful just because papers need to be produced.
I think it's the reduction of the system into a career mill (PhD for students, tenure for postdocs and grant money for professors) that ruins it. Science is inherently too unpredictable to be reduced into an algorithm that just about anyone can use to produce knowledge.
>I don't know which field you studied, but I think that in physics it's not quite that bad.
It's not that bad because it doesn't need to be. In theoretical physics I could have spent all day, every day, working on problems that have no physical evidence behind them, at a top-tier university. This is fraudulent, because physics should have something to do with the real world. If you want maths for maths' sake, go into maths.
In your opinion, when was most scientific research not flawed or "useless"? Remember, the Einsteins, Clarks, etc. of the world were wildly exceptional outliers.
Do you really think that if you were to look at most research done between 1897 and 1922, the overall quality would be higher? And that there would be fewer social factors competing with the integrity of the work?
> In your opinion, when was most scientific research not flawed or "useless"?
How long has it been since replicating someone else's study to validate the results was routinely part of the scientific process?
>The replication crisis is an ongoing methodological crisis in which it has been found that the results of many scientific studies are difficult or impossible to reproduce. Because the reproducibility of empirical results is an essential part of the scientific method, such failures undermine the credibility of theories building on them and potentially of substantial parts of scientific knowledge.
Isn't that an economic problem rather than a scientific one?
Science doesn't exist in a vacuum, it still needs humans to do grunt work and those humans need to be paid, and there isn't much money in replicating an existing study and saying "yep, that's what we thought".
Even new research needs funding. The point is that you can't trust the findings of research to be accurate if they haven't been proven reproducible. When it comes to science, it doesn't do any good to fund research into X if you don't actually do the grunt work, and part of that work is seeing the initial results carefully reviewed and then replicated.
Right now, far too often "review" is a rubber stamp and replication never takes place. That's because often science isn't really being done. If you're Tropicana you might happily fund study after study after study tweaking it each time until you get the results you're looking for so that you can get "OJ may reduce risk of cancer" into the headlines, then bury the results of all the research you funded that contradicted that, but that isn't science it's just advertising. In a better world, anyone involved in that kind of shit would be blacklisted as disreputable if not charged with something.
Research that isn't or can never be replicated is just barely better than speculation, and not really worth much of anything. If someone wants to fund science, we should be insisting that the process is actual science and the results are meaningful.
But again, this is an economic problem. In a perfect world, we'd have an infinite fund for doing science and you couldn't publish a paper until your results were reproduced.
But we live in a capitalist society where incentives are profit-driven (mostly). That's a reality regardless of whether you think it's good or bad.
That's why we need strong regulation and oversight. We know humans are highly vulnerable to greed. We can't (and arguably shouldn't) change that. We can however put measures in place to limit the harm we do to ourselves because of it.
> Isn't that an economic problem rather than a scientific one?
I'll point back to the second sentence from the first paragraph of the source above, since it answers this directly.
>Because the reproducibility of empirical results is an essential part of the scientific method, such failures undermine the credibility of theories building on them and potentially of substantial parts of scientific knowledge.
You have to fund it, simple as that. Tenure is granted to professors who get grants. Grants are gotten by publishing papers. Papers are published by conducting novel research.
One thing we could do is mandate that some portion of all grant money has to be given to independent researchers who will work to confirm your findings. I could imagine some downsides to doing this, but at least it would put money in play.
Also, another problem is that the people doing research are usually grad students working toward a Ph.D. No one wants to do a Ph.D. confirming someone else's results. You'd need some other workforce to do the work of reproducing research. Again, doable, but there needs to be money allocated for this task.
> Do you really think that if you were to look at most research done between 1897 and 1922, the overall quality would be higher?
I suspect yes, because the poor research was simply not being done. Science was not a career, there was no pressure to publish; it was the pastime of an elite few.
> I suspect yes, because the poor research was simply not being done. Science was not a career, there was no pressure to publish; it was the pastime of an elite few.
One of the critiques I hear about Academia these days, even in hard sciences, is that it is now significantly more dominated by the sons and daughters of elites (who don't have to take out loans and can afford unpaid research opportunities etc. etc.) So it might be on its way back to that.
Not to mention the "publish or perish" ethos that has become such an integral part of promotion & retention policy (and grants) in so many academic institutions.
Perhaps you are imagining less imperfect, less egotistical, less insecure scientists in a past Golden Age of Science. I for one have stopped believing such an era ever existed, or will ever exist. But I continue to believe that the scientific process will continue to serve humanity well despite the terrible flaws of us humans attempting to apply it. Yes, I believe in the scientific process, that it will self-correct, that the attacks by bad actors and self-deluded participants will come out in the wash.
As a doctoral student I had a paper rejected because I didn't toe the dogmatic line of one prestigious reviewer (seriously, the review feedback lacked only the word "dogma"). Terrible, right? But this is a common theme over centuries as near as I can tell, and here we are, living in a magnificent future!
We will continue to benefit from the scientific process, despite all our individual flaws, and despite how easily we each fool ourselves (and sometimes others).
The landscape is different today, which attracts a different sort of character. Back when scientific pursuit was a passion, you mostly had the creative and innately curious types who were drawn to the field all working on the same problems.
Now you have a large swathe of careerists with a handful of curious types, which invariably means the curious types get drowned out in the discourse and ultimately chased away from scientific pursuit.
The incentives are different as well. Peer review and showcasing ideas/work for grant money are new things. All of your work has to go through a committee of your non-curious careerist B-tier colleagues. If they don't like what you're doing, you won't get published and you won't get a grant.
It's now become a Job with the objective of getting all your work approved by committees. So it's different today.
I personally know two people who left Ph.D. programs because of fraud from their professors/departments in research. I think a lot of it stems from working out an experiment or theory for years and finding something that invalidates it. They then cover it up or embellish data to keep funding and not lose all the work time.
At the heart of the issue, as you arrived at, is that our researchers are constantly under pressure to meet some funding goal, constantly looking for their next grant to continue work. This pressure should not exist. It is my belief that, in absence of that pressure, whatever 'fraud' is happening will no longer need to exist. Of all things, researchers should not need to be worried about funding.
As a researcher, although I love the idea of a pressure-free job, I'm also keenly aware that I'm being paid by tax money and therefore should be accountable for my productivity (or lack thereof). I would rather have more robust review of research activities, and reliable consequences for misconduct, than freedom from financial pressure.
I am also somewhat skeptical that removing funding pressure will reduce fraud. It may reduce the amount of show publications, i.e., work that doesn't really make any progress but isn't wrong either. But in my experience, outright falsification of results comes from entitlement. A fraudster feels that they are entitled to a certain level of achievement and the recognition for it, and they are willing to make something up when they're not getting that recognition. I believe that the pressure that drives a person to commit fraud is internal, not external.
I think you aren’t leaving any room for scientists to be mistaken, which is an essential part of the process. In fact, it’s impossible to prove you’re correct in science, only to falsify a statement.
Your claims have merit, but they’re stretched slightly too far. The best work is done among the chaos. And doing stellar work is often a lot more chaotic than it appears in camera-ready papers.
Thank you for sharing. I did not expect to watch the entire 1.5hrs, but this story was exceptionally well prepared and presented, educational, and surprising about how far a bit of fraud can go even in hard sciences. Also, why these systems work the way they do.
> if one were to assume that every single scientific discovery of the last 25 years was complete bullshit, they'd be right more often than they were wrong.
Has that maybe been true since the beginning of 'natural philosophy'? Perhaps things seem (much) worse now because we have survivorship bias, the same way classical music seems higher quality than pop, because the 90% crud[0] has been long forgotten.
> Has that maybe been true since the beginning of 'natural philosophy'?
Unlikely. In its infancy, science -- aka "Natural Philosophy" as it was called at the time -- had to establish its credibility. It did so by delivering results.
The science system isn't perfect, scientists are just people, and if you start out with an overly naive view of what the science system is like (as I once did!) you might get disappointed.
Still, your take is wildly, overly cynical.
At the end of the day science is nothing more than systematically looking for explanations that help to make predictions. There is no way that that doesn't converge on something useful, since every explanation that is of any consequence will ultimately be put to the test and it will become entirely obvious whether it is useful or not.
And so here we are: computers compute, drugs are effective, airplanes don't fall from the sky. Is there uncertainty at the margins? Sure! But no one who proclaims to know things better than "the scientists" due to some fundamental flaw in "science" as a whole has anything to say that would be worth listening to.
Because that's not how the world works. Understanding things is not about picking the right team, it's about deeply engaging with the data and formulating logical hypotheses. And once you're doing that, you're now also a scientist.
Edit: To add a bit of personal experience to the somewhat theoretical argument above... I've met a lot of good people and a lot of assholes in academia. I've certainly met a lot of people who liked to make their results sound a bit more important than they probably really were. Never once have I met someone who didn't ultimately care about drawing valid conclusions from good data.
> Educated" people can say what they want about how important it is to believe in the science and have faith in our researchers and the institutions they work for.
This comment shows the problem in the argument. Science does not require belief or faith. In fact the idea of faith is the antithesis of science.
Science is about the hypothesis best supported by the evidence, which remains current knowledge until another hypothesis comes along that is better supported by the evidence.
Questioning is required, inquiry is required, not faith.
The problem is when you would expect someone else to have faith just because you do. Or equally, that you would expect someone to find an explanation compelling just because you did.
People who don't understand speak with conviction as if they do, but then resort to this chained together appeal to authority as their reasoning.
Yes, I'm sure everyone has a home lab that they go to for verification when they read the newspaper. I'm not being snarky - this illustrates a core problem in epistemology, trust and social systems (in case the NIH/Fauci/Wuhan debacle already hasn't).
Which exactly illustrates my point. You need to intelligently reason about what you have been told, understanding that you may not be a domain expert and that your conclusions will almost certainly miss something that the domain experts have taken into account.
When authorities present a claim about a new virus and say it's going to be dangerous, do you think "authorities always lie about everything", or do you think "well, viruses are a thing; I know they can mutate, because I know that if I get the flu this year, it's no guarantee I will be immune to the version that comes around next year"? You have also seen evidence that vaccines are generally safe and work, because most people have had them, and for the most part we don't have people suffering from polio in the western world because of that.
You could say, nah, they lie, this is plot to force us to have genome altering vaccines that will install mind control chips that kill kittens. (yeah, over the top I know)...
Most people have a degree of trust (not faith - very different), until proven wrong, that public health figures are actually trying to improve public health.
I don’t know what is going on in this thread. Did science sneak into your house(s) and kill your pet cat?
I don’t know what Universe you all live in, but I’m in my 40s and scientific research has made incredible strides just in my lifetime. This includes academic research, industrial research, both together or looked at separately. We have made amazing progress in understanding the human genome, the internals of living cells, devising incredibly powerful semiconductor devices, probing the mysteries of the Universe, developing useable machine learning technology and other areas. We have a goddamn space telescope at L2. Even the “controversial” areas are incredibly successful. In the aggregate, climate change papers from the 1980s and the 1990s broadly predict the warming we can measure today, and COVID research (despite being about two years old) has gifted us vaccines that provide enormous measurable benefits and were delivered on a timescale that previous generations would be amazed by.
Yes, at the micro level there are some individual bad results as there have always been and a few fields have had difficulty with replicating small-effect-size experiments, but even there we only know about these issues because scientists pointed them out and began improving their techniques. In the vast majority of sciences, if you pick a point 20 or 40 years ago and look at the tangible progress we’ve made, it’s enormous and more impressive than anything we have accomplished in any other area of human endeavor. I’m not sure if people here are angry at the world, or if they’ve just raised their expectations to unattainable heights, but it’s awfully depressing watching people ignore this progress.
The natural sciences offer a unique opportunity for an individual to change the world on their own. Institutions and peer review exist only to verify that the individual offered the best new model of the world.
Academia has been taken over by an advanced persistent threat of social sciences that are completely corrupted. That corruption infects and touches every aspect of the academic's life cycle these days. Funding determines most research, and doubly so the conclusions. The infection politicizes everything and rewards those who use scientific branding to support the establishment.
I think the expression "see how the sausage is made" is widely misused, here included.
While you might not have enjoyed your stint in the sausage factory, sausages have a long, successful history. Handling meat safely is hard, and it has been done with varying levels of success over the years, but historically sausages weren't much worse than any of the other meats despite the nature of their creation, and with modern food standards they are pretty much safe when handled properly (of course, proper handling depends on the type of sausage).
If you didn't like what you saw and, as a result, decided not to eat any more sausages, this seems more like a gut response on your end than any particularly deep insight into sausage production. People are generally aware that some of the steps in the process aren't nice to look at, but they judge the product based on the end result.
I also disagree. Nobody documents or publicizes the small labs doing small "useless" stuff - the kind of stuff that seems irrelevant but slowly piles up into a mountain of new information. As a PhD myself, many times I thought, "this research is quite insignificant," and what we found was truly small potatoes, but I can only hope that, in the aggregate, all of us who did small stuff during that period advanced science in significant ways.
It’s often useful to have the obvious or insignificant already established in the literature allowing one to focus on more elusive and revolutionary research!
This highly depends on the field. In physics the barriers are very high - not necessarily for things to end up on arxiv.org, but to be published in a peer-reviewed journal. That's probably not true for every field, and perhaps not even feasible, because the data is just not there. Thinking about medicine, every single data point is probably a lot of work (and, speaking of human trials, four digits expensive).
Across the academy, there is a movement away from the mission of seeking truth to the mission of seeking justice. The exact sciences are about a decade behind the social sciences in this transition.
Nah. Just because societies and journals in STEMier fields are moving towards being more inclusive in broader practices does not equate to a fundamental difference in what counts as science.
Change the economic incentives in academia and research. If right now universities and researchers are rewarded for publishing studies/papers no matter the quality, change the rules so that you're rewarded for quality, not quantity.
I think research quality isn't the primary problem. Yes, there are bad-quality papers being published, but that usually depends heavily on the institution. The real problem here is the unending demand for novelty. I find it weird that academia has a very high risk appetite for research that's "shiny and new" instead of focusing on reproducibility. They'd rather have a bunch of "new" topics with flaws in their methodologies than verify old ones that have a solid foundation but are boring. The thing is, a lot of advisors are incentivized to do so, especially in private universities, where they can advertise these new papers just to entice more students to enroll and pay their exorbitant tuition.
I'm speaking from first-hand experience at a private university whose image heavily depends on whatever "shiny and new" thing its students somehow invent. My university spends so much advertising those things while constantly raising prices at the same time - a shady business model. One of my business professors, who teaches at the same institution, also subtly judges the ethics of these kinds of universities.
A lot of institutes are also pressuring their PIs to produce not only novel work but also highly translatable and patentable work. I know of some institutes that want to preview any work a PI wants to publish or speak about in a public venue, in case any of it can be patented. Keep in mind that most of these places are funded through government grants and taxpayers, but none of the money from these patents is ever funneled back to them.
I'm not in a position to do any of that, sorry. What do you recommend I do right now to obtain the best available public health information for myself and my family?
(1) Obtain a solid understanding of statistics, (2) formulate a hypothesis, (3) collect multiple viewpoints, data and arguments related to that hypothesis, (4) sift through the collected data and judge their quality based on reproducibility and history of the author, (5) be prepared to not reach any solid conclusion
Making a decision for your and your family's health is orthogonal to all of that.
For public health choices for your family, I say do what the majority of experts in the field are doing. And only listen to actual experts specifically in the field of study that pertains to the public health issue. I.e., don't listen to a non-practicing celebrity cardiologist (Dr Oz) about immunology and virology.
For example, you will be hard pressed to find a virologist, immunologist, or epidemiologist that isn't vaccinated and having their 5yrs and older kids vaccinated.
I also like to get my health information directly from doctor groups like the American Academy of Pediatrics. Another good resource is the websites of groups like the Mayo Clinic, the Cleveland Clinic, or the editorial board of the New England Journal of Medicine. All of these groups are made up of many expert doctors and issue guidance based on input from a lot of experts.
They are rewarded for impact, which weighs quality over quantity, but only in a complex way that weights social dynamics equally with profit motives - i.e., journals.
The journal editors have a lot of power. As a start they could devote at least half of their articles to reproduction studies instead of novel research.
Science and academia have pushed with all their might over the past few years to convince anybody with a functioning brain that random YouTube videos are, in fact, better sources for truthful information.
You only need to look at the all "the science" which gets ignored by people pushing an agenda to know it's not about science it's about power and control.
The ones who are ignoring "the science" to push agendas are the ones currently without power and control, who wish they had it. Since they cannot earn it honestly through correctness, they try violence.
All of this may be true, but it leads to the question: if not science, what do you recommend instead?
Not being a virologist or epidemiologist, I have to listen to somebody. Are you saying I'd actually be better off listening to Art Bell and Joe Rogan than official sources like Fauci? If so, that's an extraordinary claim if there ever was one.
Accept that there is no source of info you can just believe at face value.
You need to collect information from across your infosphere and intensively de-bias and cross-check it.
It'll take a lot of effort. If that's not worth it to you, go agnostic on things you don't have time to figure out, or join a tribe and shout the slogans with your tribe-friends. Your life choices will depend on your values.
This has been my method for quite some time. It is very time consuming. I rarely reach conclusions, but I get close enough to feel confident in my choices.
Agree 100%. And the same is true for a lot of other things, they're just not as contentious.
Every time I get on a plane I have to trust the science and engineering behind it, I have zero clue why it works but I can't become an expert in aerodynamics just to go on holiday. When I drink milk I have to trust the science behind pasteurization, I trust the physics calculations when going up a high rise and the same for when I take medicine.
I have to trust someone for 95% of my day-to-day actions, and while a system like "science" is flawed, it's the best we've got so far. All up for improvements, but let's not throw the baby out with the bathwater.
I mean you raise a valid concern, it's a form of corruption, "muddying the waters", or CV padding that should be combated.
But doesn't low quality science impact the universities and papers that host these things? Wouldn't e.g. Elsevier want to keep charging money for high quality content? Are people and organizations not losing their reputations, jobs, and income over this?
I don't think switching over to laypersons' podcasts is a valid alternative to bad science.
"But doesn't low quality science impact the universities and papers that host these things?"
Sadly, no, it doesn't.
"Are people and organizations not losing their reputations, jobs, and income over this?"
They are not. The system doesn't actually self correct in any visible way. The people who fund bad science either can't tell the difference between good and bad science, or don't care, or both.
"Wouldn't e.g. Elsevier want to keep charging money for high quality content?"
You mean the companies that happily publish gibberish written by spambots as "peer reviewed science"?
"Hundreds of articles published in peer-reviewed journals are being retracted after scammers exploited the processes for publishing special issues to get poor-quality papers — sometimes consisting of complete gibberish — into established journals. In some cases, fraudsters posed as scientists and offered to guest-edit issues that they then filled with sham papers. Elsevier is withdrawing 165 articles currently in press and plans to retract 300 more that have been published as part of 6 special issues in one of its journals, and Springer Nature is retracting 62 articles published in a special issue of one journal."
If that's the only way to hear the full panel of scientists (as opposed to only those who are in bed with Big Pharma), then i'll gladly listen to whatever podcast they are on. And I don't need liberal-lefties to tell me who I should listen to and how i should make my opinions, thanks.
Much more abstractly, even if everyone had perfect motivations, it's science's role to be progressively less wrong over time by invalidating faulty hypotheses/theories but almost never to be conclusively correct. Any claim that we understand how or why something works should be taken with a mountain of salt. Claims that a certain observation were made are usually much closer to actual science than any claims of a conclusion.
For example, Aristotle (generally a very influential and intelligent man) believed scallops emerged spontaneously from sand, because of the limits of what he could observe back then. The observation itself wasn't wrong as far as the absence or presence of visible scallops, but what he didn't realize was that there was an entire system he couldn't observe with the tools at his disposal, and this ignorance led to an incorrect conclusion.
Even Isaac Newton, who forever changed the world with his work on physics and calculus, was an alchemist as well and had some weird ideas about how to treat the plague of 1665-1666 (https://www.theguardian.com/books/2020/jun/02/isaac-newton-p...). These were also born out of observation and Newton just doing the best he could with the tools he had at the time.
Overall, I believe it's both important to believe in science and scientists (even in the face of those cynically exploiting the system), but also to realize that even at its best, scientific information is ultimately about observations, not the ultimate truth of systems, even if that's the unattainable goal of science.
> Most research is flawed or useless [...] if one were to assume that every single scientific discovery of the last 25 years was complete bullshit, they'd be right more often than they were wrong.
If this were true, then you could easily provide numerous examples of discredited papers published in journals like those of the AAAS and National Academy of Science, and yet you didn't. Your comment is worthless.
After years of studying and working for a university, I have to agree, at least for the most part. However, I think there is significant variation in the quality of science between different fields.
Some fields, like social sciences have long been very politicized and ideological, and this situation hasn't changed for better. I wouldn't take most studies from fields like social sciences, business and humanities very seriously, especially qualitative ones.
Secondly, pay and working conditions in academia generally suck, which means most actually competent people avoid it like the plague. This also has an effect on the quality of research. Becoming a researcher these days, in my country at least, generally means becoming one of the working poor - who wants that?
Many responses already commenting that science (the process) is intentionally capable of addressing these issues, but that timespan can be longer than we’d like.
One major point in this comment and in many others in support is the observation that the quality of research output is increasingly poor and degraded. It reads to me there is an implicit assumption that because of this the overall process is no longer valid.
It’s not at all clear this is the case, and jumping to that conclusion is taking a very emotional and unscientific stance. Many academics get a shock at some point early on when they look into how things actually work at a small scale, and often later in their careers come to see the effectiveness of consensus and reproduction (or don’t care and just try to get credit for them and theirs). Jumping to the conclusion that science is somehow falling apart requires talking about change over time, not point observations.
Now I’m not providing evidence either way, and I definitely see reasons why it could be the case. But at the same time, the volume of scientific output is larger than it has ever been, and many fields are developing at an unprecedented rate. And we’re seeing the result of this in rapid technological advancement in many fields, so it’s clearly not all complete horseshit. It’s possible, likely even, that due to this comparatively rapid advancement, getting to solid consensus only appears to take longer.
You have hard science, where research is based on repeatable experiments in a lab and you can cleanly isolate every other effect.
You also have softer sciences where experiments are difficult or impossible, and where people try to come up with the best approach. Climate science falls in that category (you can’t really experiment with Earth’s atmosphere, at best backtest your model to check it is compatible with your data).
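(To make “backtest” concrete, here is a minimal Python sketch; the data and the trend model are entirely invented for illustration. The idea is just: fit on the early part of a record, then check whether the model’s out-of-sample predictions are compatible with the later observations.)

    # Minimal backtesting sketch (all data invented for illustration):
    # fit a model on the early part of a record, then check whether its
    # out-of-sample predictions are compatible with later observations.
    import numpy as np

    rng = np.random.default_rng(0)
    years = np.arange(1900, 2021)
    observed = 0.01 * (years - 1900) + rng.normal(0, 0.1, years.size)  # fake anomaly series

    train = years < 1990                           # the "past" the model may see
    slope, intercept = np.polyfit(years[train], observed[train], deg=1)
    predicted = slope * years[~train] + intercept  # out-of-sample predictions

    rmse = np.sqrt(np.mean((predicted - observed[~train]) ** 2))
    print(f"out-of-sample RMSE: {rmse:.3f}")       # small => compatible with the data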
The problem is that medicine falls within both of those categories. Some parts are fully replicable (like testing 3 anti-allergy creams on the same skin at the same time), but many others don’t have the luxury of being able to experiment on humans. So you end up relying on correlation analysis with small samples and a million possible variables. Some bits, like epidemiology, are more akin to climate science.
The problem with these softer sciences isn’t that they aren’t science (the progress of medicine over the last century is prodigious), but that it’s hard to trust any study individually. So when someone points to a paper that shows a reduction of X by Y% if you do Z, my first reaction is skepticism. So I am struggling with the whole “follow the science” thing, and the idea that any objecting opinion must be crushed.
I spent 6 years at a top-3 university in the US, working on nanotech. My respect for research is not the same after that. I felt everybody runs after Science/Nature papers and funding; I haven't found honest curiosity to discover new things. The funding organizations are also partially to blame: instead of funding good ideas/projects, they started influencing and interfering in the research for their own interests.
Yes, but there is still hope. Not in the current academia (IMO rotten beyond repair) but in human nature.
Because have you noticed how when something "real" hits the science shelves, any honest research grounded in reality and careful thought, it goes like hot cakes? The research doesn't even have to be revolutionary: anything repeatable, or at least sanely executed. The bar is incredibly low.
Everyone is starved for real ideas… or at least real implementations. Anything authentic, in the sea of marketing BS.
Which I find a hopeful sign! As long as that human instinct persists, the truth bubbles up. People still "feel" wrong when lying, which is great. When that stops (a question of culture?), we're in real trouble.
Funny aside: The front page of today's national news: an elected university rector had to step down. Reason? Academic fraud!
> The biggest problem science is facing is not an external threat from a rabble of ignorant science deniers, but the complete degradation of quality within the scientific institutions themselves.
You're correct, in that what you describe isn't a threat to the ivory towers of 'science'.
Unfortunately, here in the real world, outside of academia, those people's unassailable belief in all flavors of bullshit has quite a negative effect on the quality of life of the rest of us. But at least I can sleep soundly, knowing that 'science' isn't being harmed by them.
When, say, the planet burns down around us, because of inaction on climate change, I'll take solace in knowing that the science behind climate change was never in any real danger - unlike the rest of us.
Given that the bogeyman in the late 90s was that we'd be out of coal by 2020 (I can dig out the material from British schools to show you this!), I think we can survive the over-exaggeration of how soon, and how alarmingly, the end of the world is coming.
As I keep saying, why can't we just make the planet better? Why is that so difficult? What's the risk?
I think you're hitting an important point here. The academy at its best is meant to be like a hospital for people's ignorance about their subjects and for neglectful, intellectually lazy practices. What you're commenting on right now are the many forms of proverbial sickness that have gotten out of control. I personally think that if the situation is going to be improved, it's a "physician, heal thyself" situation, where some of the very people guilty of p-hacking and junk science are even now being called to come forward and change, risking their careers as a sacrifice in exchange for a real possibility for us all to move forward.
I'm assuming you entered a field where you weren't prepared for the egos and controversy surrounding a topic.
This is what a lot of students hit if they're ill prepared (and that is not a failing on the part of the student). I remember my advisor saying he saw it as his responsibility to shield the new students from anything crazily controversial until we knew how to walk for ourselves, make plots quickly and easily, and understand the jargon and materials... aka until we were mid-way through; then we should be thrown to the wolves to form our own opinions on topics.
You're now falling for the Daily Mail mistake: "big sensational headline in field X, therefore it must be false".
Just look through the Nobel Prizes to understand what directions and leaps fields of science are making, and ignore the self-perpetuating egos that end up on the front of Nature or the Daily Fail.
We have cloned animals, we have perfected fiber optics, we have discovered the nature of the Higgs boson and gravitational waves, and I don't even understand enough about chemistry to say why or how that field has advanced, but it clearly has.
Unfortunately I'd suggest your supervisor failed to prepare you for your PhD.
Don't worry, I can point to several papers with maths flaws in them that impact the result by 2 sigma, but nobody cares about those results because it doesn't change the overall message: "We saw nothing after intensive scrutiny and looking for it", despite being on flaky mathematical groundings.
That 'rabble' of science deniers is growing very quickly (just look at the evolution deniers in the US) and keeps making arguments like "you can't take the people out of science".
If the rabble gets big enough, politicians listen and then when politicians are listening to anti-science people on whatever topic, funding inevitably gets cut or re-directed and scientific research suffers.
Don't worry, I'm not talking about the types of research that support Raytheon by making bigger bangs, but research that cures cancer in babies and infirmity in adults and makes better materials for the world...
I don't disagree that there's degradation of quality. There should be a lack of trust in our institutions (and really a need to invest in them more). What confuses me though is why there is trust in sources that have so much less reason to be trusted. The forces undermining the quality of scientific institutions are at play outside those institutions as well, and outside the framework, there's far less of a defense against them.
We're seeing all kinds of chicanery that I had thought had largely been consigned to history, but it's all making a comeback.
> The biggest problem science is facing is not an external threat from a rabble of ignorant science deniers, but the complete degradation of quality within the scientific institutions themselves.
With respect, you can make this criticism about every single field and discipline. This is because the endemic problems under discussion have nothing to do with science per se, but are unique to the human condition.
The optimistic outlook is that science is an integral part of human progress, technological or otherwise. The implication here is that even if there are a lot of rubbish papers out there, they will be weeded out, because progress demands practical concepts that actually work. So the dodgy work of "scientists" who rort the grants system will ultimately be discarded and forgotten.
This forum and all the infrastructure that supports it was based primarily on hard science and conducted in the "hot path" of tech progress where inaccurate results are both unacceptable and quickly falsified: mathematics (Shannon, etc.), physics, chemistry, materials engineering, electrical engineering, and computer science.
There's a reason technology has changed the world significantly in the last 50 years and other scientific areas (e.g. medical research) haven't as much.
What other areas have you experienced? What you're describing is just human nature, every field of human endeavour is like that, because they're run by humans.
Despite this, we have made unimaginable scientific progress over the last few hundred years, and we are continuing to progress at an even faster rate.
The rabble of ignorant science deniers aren't a problem for science, but they are a problem for policy. Reality itself can't sue climate change deniers for defamation, and for all intents and purposes they have control of policy in the world's largest economy as long as the US senate is deadlocked.
As someone who also works in academia, it is astonishing how many people with no background in the field come into this thread to cast doubt on you. There is a serious degree of conservatism among liberals who think that science and academics should not dare be questioned.
Funny how "don't believe, always question everything" has transformed back into "believe in science".
Believing in science is a very old thing: the Maya believed in their eclipse-prediction science so much that they even made human sacrifices to save the sun from being eaten.
We should teach the public not to "believe in science", but the main skill of any scientist: to try to find errors in the theory you think is right with more ardour than in the theory you think is wrong.
I think we got there by mixing up science and policy. In an emergency (like a pandemic) you can’t have an atmosphere of doubt and uncertainty in decision making; someone has to make the tough calls and stand by them. Unfortunately we don’t have a clear delineation between “science” (what and how we know, and how certain we are) and “policy” (what we should do based on this information) in the public discourse; the same talking heads are often representing both. Then we naturally end up at a place where we have to either “believe in science” or cast doubt on policy decisions, which seems suboptimal.
Why can’t you have an atmosphere of doubt and uncertainty in decision making? Doubt is always better than false certainty. Public figures lying during the pandemic have only prompted more people to be distrustful of science and of official policy decisions.
Because then nothing would ever get done. We would debate endlessly whether declaring the war on Germany was a good idea, or whether going to the Moon is a stupid goal, etc. etc. There’s certainly no shortage of arguments against any decision. Once the decision is made, continuing to doubt it is often not helpful.
I actually think this was one failure in our response to the pandemic. Take N95 masks, or testing, or what have you. There’s been so much back and forth on how well things work, and so many changes of messaging - often driven by real studies, thus reflecting the uncertainty we’re talking about! - and I think that’s the cause of some of the distrust you’re talking about. In an alternate reality the CDC would have come out on day one to say “N95s are obviously good, and so are widely available rapid tests, so let’s make a ton of them” and stayed on this message. It’s the uncertainty and doubt that creates distrust, not the other way around.
But people were debating all these things: war on Germany was not declared for a very long time, and the war in Vietnam was abandoned (not sure whether to consider that good or bad).
The initial stance on masks was to lie that they were useless, so that people wouldn't buy up the masks that doctors needed. So if the CDC were to stay on message, it would have stayed on that message, not the one that masks are good.
Your proposed approach, of a few people making decisions and the rest not even being allowed to debate, cannot create trust; it can only create the USSR or North Korea.
I’m not sure how the discussion got to this place. Please allow me a reset. My point is only that ideally people in charge of policy should be upfront that they’re making decisions based on imperfect information, and it should not require people “believing in science”. This would have the effect of leaving the science, with all its messiness, to scientists, and would be better for everyone involved.
A sidenote on the CDC: the null hypothesis is institutional inertia. There were no mask mandates during many recent pandemics, so the CDC was simply following long-standing policy. This doesn’t require lying: if you believe that a policy is useless and will actively harm the supply of masks, it makes sense to come out strongly against it. I’m open to evidence to the contrary, though.
Not sure what you are saying, but I do believe that we scientists need an extra big dose of humility, and we need to learn how to better accept when we are wrong.
Those are unsubstantiated statements about domains you know very little about.
These days science is so specialised and lacking narrative that you can reach PhD level without any notion of what's really going on in your field, and even less into others.
Wow man... totally hit the nail on the head. And any time you bring up questions around experimental design or anything, 99 percent of people are already yawning and not interested in anything you're saying at that point. Everything happening right now regarding the vaccines and COVID is alienating so many people and guaranteeing they will never listen to the government or any other "trusted" institution ever again.
Where did you study, what was your concentration, and did you finish? I'm not sure whether anyone on HN ought to care about your opinion. I'm not convinced that the sausage factory you experienced was a hot dog cart.
"The fact remains that if one were to assume that every single scientific discovery of the last 25 years was complete bullshit, they'd be right more often than they were wrong."
> The fact remains that if one were to assume that every single scientific discovery of the last 25 years was complete bullshit, they'd be right more often than they were wrong.
This is not a fact, it's a claim (it may even be hyperbole; we don't know for sure). But even granting it, you're overlooking a very important perspective: there are more important and less important discoveries (results), and thus the effect of their being wrong should also be weighted. I'd also be inclined to think that the more important ones are scrutinized more and are thus likely to be right more often.
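(To make the weighting point concrete, here is a toy Python sketch; every number in it is invented purely for illustration.)

    # Toy illustration (all numbers invented): even if most *results* are
    # wrong by raw count, the importance-weighted error rate can be far
    # lower when high-impact results get more scrutiny and replication.
    results = [
        # (importance weight, probability the result is wrong)
        (1, 0.7),   # minor result: little scrutiny, often wrong
        (1, 0.7),
        (1, 0.7),
        (10, 0.2),  # major result: heavily scrutinized, usually right
    ]

    raw = sum(p for _, p in results) / len(results)
    weighted = sum(w * p for w, p in results) / sum(w for w, _ in results)

    print(f"unweighted share wrong: {raw:.0%}")       # ~57%: "mostly bullshit"
    print(f"importance-weighted:    {weighted:.0%}")  # ~32%: mostly right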
The great and unique thing with science, not ignoring the issues you've raised, is that it can correct itself. And it's better at it than everyday people are. So yep, the issues mentioned by you and others should be corrected (and some of these, like the replication issues and the publication bias, etc. are well known which means that I'm sure they are being worked on). But the general public also has to learn that science is still our best chance to understand reality.
Because most science (and thus reality) deniers usually just look at it as if it were a simple choice between "do I believe them or not", as in "do I let myself be lied to/misled or not". Whereas the real choice is: whom would I rather believe? The science guy, or myself (without knowing much about anything, basically)? Which will give me the better results? Because when I have to make a decision (e.g. vaccinate or not), it will be a decision between these two. (Just like "vaccinate or not" is really a decision of "contract the virus with or without being pre-immunized by the vaccine", and not "do I want to risk the side effects or not".)
feels like this is part of a larger pattern of corruption up and down institutions in western civilization. everything is gamified, monetized and rigged.
Respectfully, I disagree with you: it's mainly because of social media. Academic degradation definitely contributes (it gives conspiracy theorists ammo), but if you've been exposed to the echo chambers that drive anti-science conspiracies, I think you'll realise how toxic the movement and social media are. Mainstream social media is the gateway: it surfaces these influencers to the curious with less harmful theories like Ivermectin, then moves them on to antivax and homeopathy and mistrust of doctors and traditional drugs.
A lot of these influencers have been deplatformed and have flocked to alternatives like centralised Telegram groups and fringe social media. Non-conspiracy people treat conspiracy people like outcasts; no one likes arguing to no conclusion when the answer is "I don't believe your source", so they retreat to these echo chambers for social acceptance. I've seen this happen to people I know: they just doom-scroll and share anti-tech, -media, -pharma, -government memes. They become rabid in their beliefs and there is no pulling them out. I've seen this firsthand infect people who are "smart": university educated, career success. It's a shame; part of the decent people I knew is gone. There is a simmering undercurrent of "I'm smart and you're an idiot", and it's probably felt both ways. I'm not from the US, but I've read accounts of what Fox News does to people and it feels similar.
Or such narratives are created, just like the narratives that all politicians are corrupt or that meritocracy is wrong, as an attempt to weaken western democracies.
Most nations spend a notable portion of their defense budget on disrupting opponents via whisper campaigns, and on dividing an opponent from within.
This has been going on through all of recorded history.
Disrupting belief in any important part of your opponent's society is key. The goal is to cause societal dysfunction, an inability to work cooperatively.
Why fight a war if your opponent self-destructs from within? Or, almost as good, loses some capacity to make war?
Also good is if your actions cause your opponent to change regimes and become aligned with your goals.
So how do you think this gets solved? What do you think some of the root causes are?
Semi-related: I was reading about the history of some of the "US greatest universities"; Harvard, Yale, Princeton, etc. were actually Christian universities. Didn't Cambridge, founded in the 1200s, have a similar history?
I'm not sure of the complete validity, but it's an interesting perspective for sure.
I even remember reading about Einstein working alongside Father Georges Lemaître, a Belgian Catholic priest (originator of the Big Bang theory), on some groundbreaking work?
I've always joked that maybe G-d gave us the computer because he felt pity on us, for it took 4000 years for man to calculate several hundred digits of Pi, and who knows how many great minds were dedicated to hand-calculating only a more "perfect circle". Didn't Archimedes himself die calculating Pi?
"Nōlī turbāre circulōs meōs!" a Latin phrase, "Do not disturb my circles!". It is said to have been uttered by Archimedes—in reference to a geometric figure he had outlined on the sand—when he was confronted by a Roman soldier during the Siege of Syracuse prior to being killed.
The root cause is that scientists are experts in operating tools. Just as many programmers can write code, it doesn't mean they understand the process.
The medical profession is largely still the snake oil industry it was hundreds of years ago, and Govt legislation enables its monopoly.
I have as little to do with the NHS as possible because these so-called experts are experts in minutiae and not the big picture. They prescribe what they have been taught. The Govt/NHS also limits a GP's prescribing ability, so private healthcare should be the gold standard, not a communist-era make-work scheme like the NHS.
You try getting a doctor to explain their thinking, most will not, and when you challenge them they go all jihad on you, in a western country of all places!
This just demonstrates cognitive dissonance in what they have been taught. And any country, institution or individual that believes in sky faeries just shows humans can hold diametrically opposed, irrational beliefs, which should be cause to be struck off any so-called professional register!