The illusion of evidence based medicine (bmj.com)
324 points by pueblito on March 24, 2022 | 261 comments


My wife and I were just talking about this yesterday. She is an extremely specialized surgeon and she is horrified by the academic publishing in her area of expertise. It is often utterly wrong and conflicts directly with her experience. I'm not sure she'd agree with the reasons mentioned in this article as the main issue, although we have personally seen less ethically motivated doctors publishing research to help line their pockets via the industry. In her opinion the main issue is that academic doctors HAVE to publish in order to advance their careers. So you end up with a never ending stream of terrible research, churned out by doctors who are not experts in doing research and who actually spend most of their time doing other things. This happens even at the best centers, at least in her area of expertise. I don't know what the solution is, but I do know that whenever she has needed treatment for herself she generally ignores most of the research and talks to the most experienced surgeon she can find.


> My wife ... is an extremely specialized surgeon and she is horrified by the academic publishing in her area of expertise. It is often utterly wrong and conflicts directly with her experience.

Genuine question: how does someone in her position know they're right and the research is wrong, and not the reverse?

I'm certainly not denying that she may be correct, and I'm not unquestioningly defending research evidence either.

We've come a long way from "eminence-based medicine", where the senior clinician was impossible to challenge. Research has helped us move towards a more nuanced approach to care (I would say "holistic" but that word has become tarnished).

EBM "phase one" alienated clinicians despite explicitly acknowledging clinician expertise, by overemphasizing evidence and undervaluing experience. EBM "phase two" aims to redress the balance, but perhaps the mistrust is too significant now. It would be a shame if so.

Some reading:

https://www.bmj.com/content/348/bmj.g3725

https://www.cebm.ox.ac.uk/news/views/it-is-time-for-evidence...


> Genuine question: how does someone in her position know they're right and the research is wrong, and not the reverse?

She doesn't know, and that's the problem with medical research: being a specialized surgeon is like being a carpenter who's made hundreds of pieces of a particular kind of furniture. Then a scientist comes along, saying we should change how we do it. As an experienced carpenter, you know that most of those scientists actually have very little practical experience, because they used that time to publish, are actually very bad practically speaking, and therefore oftentimes reach nonsensical conclusions. So, what to do? You go ask the most practically experienced guy you can find, hoping he'll be able to guide you. Repeat 100x/year.

I'm not exaggerating: some of our academic practitioners who've published a lot are really among the worst practitioners I know, and regularly kill people who should have lived! Ironically, some of them are respected professors, and patients often seek their care thinking it'll be better. One of them is a surgeon and has received an award for outstanding teaching. He's the absolute worst surgeon I know.

So, no easy conclusion here...


Not the OP, but one of the organisations I work for publishes best practice guidelines for diagnostic tests for various rare genetic disorders in an attempt to avoid this issue. However, the process takes between one and two years and starts with identifying a group of leading clinicians and academics who specialise in that condition and are willing to write the guidelines (on their own time, for free). They then do a literature search to gather any relevant material, and at the same time put out a call for submissions to clinicians around the world asking them to contribute. A (usually virtual) conference is then held and the writing begins. Over the next few months a set of draft guidelines is produced and then put out for comment. After a period for feedback a second draft is produced and sent out again. A consensus building exercise is then conducted (aka another mini conference involving the guideline writers and others) and the guidelines are finalised according to the consensus. They're then published in a peer reviewed journal.

As you can imagine this takes time and money, and all of it is sponsored by a not-for-profit EQA organisation, with no pharma companies involved (since they're the ones who provide the tests, and that's a massive conflict of interest). As a result the org only manages to produce two or three guidelines a year, when really they'd like to be able to produce a dozen or more. And the guidelines also need to be reviewed at regular intervals, which, while slightly less long winded, is still long winded. Really this is the sort of thing that is a prime candidate for government sponsorship, and sometimes it is, but since these are rare conditions the cost/benefit analysis is not always favourable.


I can totally believe that. What you're describing has an uncanny resemblance to academic software engineering research: many of the published insights scale up all the way to one-semester student projects — in other words, are completely useless in the real world.


This reminds me of agriculture under the Soviet Union. Stalin wanted to build a technocracy with the brightest in charge of their specialized fields, so he brought in the most famous Russian agronomist, Lysenko, and gave him complete control over the Soviet agricultural system. Lysenko took control away from those ignorant and backward Russian farmers and revolutionized Soviet farming with all the latest scientific breakthroughs... It wasn't long before millions of people were starving to death.


Except that this is not the only (or even the main) reason the big starvation happened. It was because Stalin had to pay the US for their technological knowledge and deliveries of factories, and as the ruble wasn't worth much the US demanded to be paid in natural resources, mainly agricultural products. As Stalin saw the writing on the wall regarding the danger of a central European nation trying to overtake Russia, he made that deal and ferried most of the Russian produce towards the US.

So to be able to (at first) produce tractors and later switch that to tanks he sold his people to the US that knowingly and gladly accepted the cheap food.

So both nations' political and industrial leaders are responsible for the mass starvations. One can't just blame it on Stalin or on Lysenko.

But Lysenko might have made the problem even worse. That much we can agree on.

Source for further reading: https://twitter.com/kamilkazani/status/1505247886908424195


So if a parent spends their whole paycheck on a new shotgun and their child goes hungry, it is equally Remington's fault. Got it


Remington’s raison d’être is producing shotguns.

The US’ raison d’être is (hopefully) not knowingly starving millions of people to death. When you’re making a deal of that size (between countries, not a company and consumer) no deal is purely transactional, there are ramifications.


The point is that the US (government) knew at the time of these ramifications. So in my book they were complicit.

Not that any other country would have acted differently. I am not here to metaphorically shit in the yard of the US. It is just business as usual when it comes to politics. Sadly. I just don't expect people to act with integrity when in positions of power. I still like to be pleasantly surprised if it happens - I just don't expect it.

Edit: Typo


It took me a while to understand in which direction you are arguing.

Of course the US is complicit (just not solely complicit, as you imply I claimed - nice rhetorical figure by the way, just a bit too obviously applied). They knowingly let a dictator pay them in natural resources while these resources were lacking for the people, and this led to famine and mass starvation.

So, to stay with your image: when Remington sells me a shotgun knowing I will go and use it to kill my family (because it was clearly foreseeable), they are complicit.

If the US had not known the consequences of its deal with Stalin for the ordinary people, we would be talking about a different situation here. But the US knew. And they still took the grain and other resources. And that makes them complicit.

In my world, that makes the U.S. very clearly an accomplice to a dictator who is starving his people. And also profiteers from such a system.

But hey, as if this were something new or shocking. I don't know about the US educational system, but I thought it was public knowledge that the US has acted that way, as its standard modus operandi, for many decades now.

Whether with South American dictators, African dictators, or even dictatorial regimes in the Middle East: as long as they do not become too rebellious and do not challenge the supremacy of the USA, the US is willing to trade and not look too closely at the crimes against humanity these dictators are committing.

Take Saddam Hussein as an example.

Wasn't it the U.S. that continued to financially and militarily support Saddam Hussein in the Iraq-Iran war even after he openly used chemical weapons? Without the U.S., Saddam would never have become head of state in the late 1970s. And the US gladly bought his oil - ignoring what he did.

In the same way, we Germans are currently also responsible (at least in part) for the prolongation of the war in Ukraine, because we still do not want to give up the gas supply from Russia. We are prolonging the war because our money is helping to finance Putin's military advance.

Our convenience and the intransigence of our government make us complicit in the death of many civilians.


> So, to stay with your image: when Remington sells me a shotgun knowing I will go and use it to kill my family (because it was clearly foreseeable), they are complicit.

No, that's a very different metaphor.

The item being bought causes no harm. The problem is reckless spending. That puts the bar of responsibility in a very different place.


If the parent needs a shotgun because of Nazis on their front door and has no real choice, then yeah, pretty much. Not "equally" but not zero.


False: Russia was attempting to rapidly industrialize and needed to purchase foreign technology, which required foreign currency.

So he sold the grain that would feed his people in exchange for USD, Deutschmarks, Pound Sterling that he could use to buy industrial equipment.

I mean, it's not exactly out of place for Stalin to sacrifice his people's lives to realize his goals for Mother Russia. Plus the added benefit that the people dying were troublemakers anyways (Ukrainian nationalists).

Do note that when the famine did strike, those same foreign countries offered famine relief that Stalin promptly turned down.


That thread establishes that Stalin exported agricultural products; it doesn't say that US businesses were trading directly for grain.

The US was probably a net grain exporter at the time, so I think you are not presenting the situation very well. There is certainly complicity in working with an oppressive government that is starving its people, but saying it was done for "cheap food" when USSR was selling the food elsewhere and paying in dollars isn't the right way to present it.


It's a great way to present it if you were a present-day dictator and wanted to justify invading a neighbouring grain-producing country and blame it on historic wrongs perpetrated by the evil USA.


> ...had to pay the US...

had to or chose to?

It's an interesting thread, thanks for the link, but "knowingly and gladly" is not supported there; I don't see how you can lay any responsibility for the Holodomor at Henry Ford's feet.


This is happening in Sri Lanka on a smaller scale right now under the guise of organic farming.


After getting a degree and working in the software industry, I'm kinda worried when going to the doc, LOL.


Not to trash the profession, but in reality it's no different from being a computer engineer, plumber, accountant or anything else.

I used to think "well, med school is really hard, so the bare minimum for a doctor is pretty good". Then I started working in healthcare and realized, no, that's not true at all.

The best example I can think of is when I worked on a project where we wanted to get ready for a much better competitor in a disease area. Not "slightly better", but "mind-blowingly better, and not using it would be malpractice". Most doctors had heard of it, but we wanted to remind them not to purchase the company's older drug, because they'd just return it a year later.

Turns out there were multiple doctors, after the new drug was approved and available, that continued to use the old drug. We literally had to go to them and say "hey stop buying the company's drug please". And I realized, damn, these doctors' patients are getting really shitty treatment.


This is why pharmaceutical sales representatives are so important. Most doctors are far too busy treating patients day to day to stay up on every new drug that comes out. CME can help cover some of this, especially with new procedures or treatments, but nothing can keep up with the 100 new drugs for diabetes. What really helps doctors is having people who can come talk to them, tell them that drug X is better than Y, give them the literature if they want it, and tell them what insurance has a chance of covering it. Some doctors will always do what they know, but the majority just need to know there is a better option.


Or just follow the clinical guidelines and associated updates. They're there for this exact reason, to make sure every doctor uses up-to-date methods. No need for sales reps in this case.


How quickly do you think clinical guidelines are updated?

If a new cancer drug was launched that could help would you prefer a doctor that said “sorry, the next guidelines update is in 9 months, so I’m not using that drug”?


How can they trust the literature? The article was just talking about how the evidence can't be trusted.


Doctors aren’t stupid. They can judge literature themselves.


My 60-year-old father was at the hospital for a heart attack and the doctor wrote in his form that he was a pregnant woman. That day I lost hope for any kind of baseline.


> scientist comes along, saying we should change how we do it

That doesn’t sound like science. An example or pointer would be helpful.


Remember the start of the pandemic? Some prof published that we should withhold steroids and intubate ASAP on the basis of preliminary evidence, throwing basically everything we knew about ARDS out of the window. Most reasonable ICU guys were skeptical on the basis of their own practical experience, but had no choice because who would dare contradict science, right?? We very likely killed many people that way.

What do we do now? The exact opposite, and much closer to usual ARDS practices. You can argue that wasn't a mistake because it followed the process of scientific discovery, but it's nevertheless a tragic example of practical experience being right where the advice of the scientific community was wrong.


I feel like we should be critiquing society’s response to science rather than implying that their paper should never have been published.


> implying that their paper should never have been published.

That's really not what I meant. I have no problem with the publication. But put yourself in the shoes of someone such as myself, who's intubated tens of cases on admission and probably killed many doing so. Wouldn't you be questioning the structure of our research system then? Not that I have any better solution at hand, mind you...


> but had no choice because who would dare contradict science, right??

This is what I don't get. Science is contradicted and questioned all the time. It's the main mechanism by which science evolves.

Also a crucial feature of science is that it's literally always wrong. Science is the iterative process of building models of reality, and those models by virtue of being models never 100% represent reality. Scientists understand this and are always working to improve our knowledge by further refining our models.

Why do you feel like science is not to be questioned? Moreover, if you're dissatisfied as a practitioner with scientific review, why don't you get involved more with the review process? In my field practitioners publish and are involved with our research and peer review all the time, is this not true for your field? Can you work to change it?

In my experience if you don't agree with published research, you don't just lay down and accept it, you do your own research to counter it, and publish that with a big fat citation of the old paper saying they're wrong. Where does this idea come from that science is proclaimed from on high? If you don't like it, roll up your sleeves and prove them wrong -- science is done from the bottom-up.


What you're saying is technically accurate to the intended definition and purpose of science, but if you're seriously asking why these people didn't question or contradict, I'm not sure where you've been for the last several years. So many pandemic response items, in particular, have been "Trust the science"/"Follow the science", and any questions or possible alternatives to the prescribed Method Of Treatment are not only dismissed, but actively persecuted.


Thanks for clarifying. I was actually going to respond in a similar manner to ModernMech. Your original comment came off as very defeatist towards the scientific method. I see now that it is aimed at our collective response to the scientific method. I also share your disdain towards this response, especially with the pandemic response.

At the same time, though, I get it and I do this too. This might be too reductionist, but I think most of it comes down to trust. It's very expensive, in terms of time, to have a critical understanding of any specialized field. So taking shortcuts like past experience, intuition, and relying on others is more profitable, especially if the goal is survival rather than understanding of a system. This then becomes a game of finding the most "trustable" person, but that's the catch-22.


To be clear the persecution has always been a feature of the scientific process, well before Covid. I don't like it or think it's necessary, but look around -- who can argue with the results? It's always been there, and no one has figured out how to do science without people getting really attached to their ideas and then attacking (sometimes very viciously) others who go against them. Science has factions and politics and rivalries, and all the inherent human baggage associated with those words. It's never been a bunch of mentats or vulcans deliberating over the best ideas without emotion; it's been apes warring over ideas, and not even necessarily the "best", or "good"/"correct" ideas. Did we forget the whole concept of tenure is so that your institution can't fire you if you start thinking the wrong things? Talk about dismissal and persecution!

I don't know who came up with this notion that science is high-minded (probably a scientist) but it's really as petty as any human social process. Go back through history and you'll note plenty of scientists persecuting some other group because they're challenging the orthodoxy. If that makes you uncomfortable maybe science isn't for you, but that's the truth of how it's always worked (and always will work unless you can solve the human angle of this).

Most ideas, especially scientific ones, are almost never accepted on their own merits. They have to be forcefully pushed on the world, and that ruffles some feathers, because that means crowding out some other competing idea. Remember we call this a "marketplace" of ideas. Not everyone comes home from the market happy.

I guess my bottom line is: welcome to how the sausage is made. During Covid some people got a look and found out that they don't like it, shocker. The good news is you're allowed (no one is stopping you) to participate in the process with your own ideas. You just have to steel yourself against criticism and inoculate yourself against the inevitable retaliation and pushback. If you believe in your ideas, don't give in!


This is an excellent question (and I'm reading through those articles now, thank you). I know she thinks about this a lot because her patients bring her research which is obviously broken, but why should they trust her? In her specific case she has done thousands and thousands of an extremely specialized operation - one of very few people in the world to have done that many. She keeps an open mind of course, but she also sees a lot of harm done. Medical research is hard: lots of confounding factors, and almost all the time in her particular area researchers have a very small n, yet they draw conclusions anyway from that small n (and they are often just not good at research - they are doctors!). I'm sure the scientific method will come through in the long run, that's my opinion, but along the way it is doing a lot of harm too.


>I'm sure the scientific method will come through in the long run, that's my opinion, but along the way it is doing a lot of harm too.

And that's the rub. Most people who try to improve the system reinvent it with the same or worse flaws. You'll notice for all the hand wringing and complaining here about how flawed the process is, no one really has put forth a suggestion on how to fix it. A lot of people here are not even part of the process and looking in from the outside, so they don't even understand the incentives involved enough to fix it. Hence the endless complaining...


Not a surgeon, but in my area often it's not even so much that the results conflict with experience in a clinician versus researcher sense (although they often do), but that the research design itself is obviously flawed, in the sense that the assumptions made in the design, or the corners that are cut, or the way the data is massaged, or whatnot, are completely unrealistic, and the conclusions being drawn are completely inappropriate compared to what was done.

It might even be that you don't have to be an experienced clinician to tell something is wrong, but an experienced clinician would pick up on the problems immediately just through familiarity.

Feynman used the term "cargo cult science" and in my experience it applies to EBM more than any other area I've encountered.


As a statistician, I see a constant flood of incorrect statistical conclusions and poorly designed experiments.


The fact of the matter is that the vast majority of people are just barely competent at even bullshit paper-pushing jobs, let alone highly technical work.

The reason the majority of people end up with higher degrees and the jobs that require them has more to do with wealth/class and social privilege than actual competence or intelligence. It's almost to the point where people who are actually good at something end up in an appropriate line of work by coincidence.

That's the natural outcome when you organize your society by wealth instead of skill.


"how does someone in her position know they're right and the research is wrong, and not the reverse?"

This is where you pull out the rusty ol' tools of Figuring Things Out From Evidence, which mostly boil down to:

come up with your own theories and listen to theories from others

think really hard about which one explains historical evidence the most consistently

think about experiments to make sure theories aren't overfit to historical evidence

conduct those experiments, and "penalize" theories which don't explain the experimental results

these tools are strange things because they (noisily, but surprisingly reliably when applied truly honestly and openly) predict which theories are predictive of future experimental results.

and that's all you want out of a theory, really.
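
to make "penalize theories which don't explain the experimental results" concrete, here's a toy sketch (my own illustration with made-up numbers, not anyone's actual method): score each candidate theory by the log-probability it assigned to fresh results it wasn't fitted to.

    import math

    def log_score(theory, observations):
        # Sum of log-probabilities the theory assigned to what actually
        # happened; higher is better. Theories that hedge by spreading
        # probability everywhere are penalized automatically.
        return sum(math.log(theory(obs)) for obs in observations)

    # Two hypothetical theories about a coin: one confident, one hedging.
    confident = lambda obs: 0.9 if obs == "H" else 0.1
    hedging = lambda obs: 0.5

    new_data = ["H", "H", "T", "H"]  # fresh experimental results, not the
                                     # historical evidence theories were fit to
    print(log_score(confident, new_data))  # ~ -2.62
    print(log_score(hedging, new_data))    # ~ -2.77

scoring only on new data is what keeps theories from being overfit to the historical evidence.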


and why should anybody else trust her? well, if she just says "actually, I'm right and the science is wrong", nobody should! Anybody can say that, regardless of whether they're right or not. I could say that!

if she says, though, "actually, I'm right and the science is wrong though, and here's why:"

and explains her entire process, what results she's seen hands-on in her own surgeries, is careful to also include anything she saw that isn't consistent with her theory, shows her reasoning, and shows her work... that's convincing and persuasive. It's also not easy, and requires lots of work. So does figuring things out in general.

And that's called "an argument".


And if one bad paper causes a thousand people to have to put in all that work to rebut it, we have a big problem with the system.


It’s sometimes called reasoning from first principles too.


Cannot speak to this situation in particular, but some highly specialized surgeons track results, incoming and outgoing data, etc. very thoroughly, and might therefore actually have better visibility than a researcher looking at a small data set.


You need just one contradicting observation to invalidate something, while a million consistent observations can never fully prove it.

(Paraphrasing Karl Popper, mentioned in the article).


Right, but you only have aggregates moving the needle here. For instance, your experience is that intervention A leads to >90% elbow mobility post-surgery in 90% of patients and intervention B leads to 80% mobility in 90% of patients.

What epistemological techniques do you use to distinguish intervention A from intervention B and how do you distinguish your experience from the paper that shows the opposite result?

The number of variables is large, including, distressingly, your skill at technique A and technique B. Is the difference in outcome merely your lack of skill in one? Is the difference merely the lack of skill of the guys observed in the paper?

My parents are also experienced surgeons and the normalizing method they use is to share their findings with fellow surgeons. Patients where they work are much more amenable to having photographs of their surgeries used as discussion material.

Medical science is a primitive science and it is practised with primitive statistics, so most evidence is just missing. Lots of surgical techniques are discussed based on too little evidence with massive true error bars.
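
To make "massive true error bars" concrete, here's a minimal sketch with made-up numbers (90/100 vs 80/100 good outcomes, purely hypothetical): a pooled two-proportion z-test, which asks whether the difference in rates is bigger than chance alone would explain at these sample sizes.

    import math

    def two_proportion_z(success_a, n_a, success_b, n_b):
        # Pooled two-proportion z-test for the difference in outcome rates.
        p_a, p_b = success_a / n_a, success_b / n_b
        pooled = (success_a + success_b) / (n_a + n_b)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        return (p_a - p_b) / se

    # Hypothetical case series: 90/100 vs 80/100 good outcomes.
    print(round(two_proportion_z(90, 100, 80, 100), 2))  # ~1.98

A z of about 1.98 barely clears the usual 1.96 cutoff, before even touching confounders like surgeon skill or case mix; with 50 patients per arm the same 10-point gap would not clear it at all.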


And what's puzzling to me is how so many people take Popper's theoretical discussions to be of such validity in the real world.

The experimentalist world is messy and full of hidden dependencies.

(And to be fair it's usually the people with less experimental experience that take him the most seriously)


> You need just one contradicting observation to invalidate something

No, that's wrong. There are several reasons why a negative result doesn't necessarily disprove a theory or a hypothesis: measurement uncertainties/noise, experimental errors, software bugs, bit flips, ...


It's a very important observation you made about various theories. I find that in medicine, counter-evidence to an established belief is almost always either completely ignored or explained away casually.


Reminds me of that poor guy who told surgeons they should wash their hands. They didn't take kindly to that.


I’d give the doctor above a little more credit than that, and also would expect the papers she dislikes to be far less impactful than that.

Clinicians I know who get frustrated with academic research that draws conclusions about clinical work are criticising papers that are more like this:

(I’m exaggerating for effect, but not exaggerating a lot)

“People think this clinical technique is effective. But when some academics with very little clinical experience performed a rough approximation of the technique, in the wrong area, with the wrong tools, in the wrong way, on a small number of randomly chosen patients, they found it performed no better than placebo. Therefore no one should use that technique on any patient.”


That would be Ignaz Semmelweiss. I think you are understating his actual life outcome for having challenged the medical establishment with hard proof.

Poor doesn't begin to describe it.

https://en.wikipedia.org/wiki/Ignaz_Semmelweiss


This is all true. Welcome to the real world.

> academic doctors HAVE to publish in order to advance their careers

Indeed. But the issue is what criteria will you use for promotion if not that? Don't even think of evaluating clinical results, that's even worse than what we have.

> This happens even at the best centers

This happens ESPECIALLY at the best centers. Because centers are in good part ranked according to citation counts.

> I don't know what the solution is, but I do know that whenever she has needed treatment for herself she generally ignores most of the research and talks to the most experienced surgeon she can find.

After 10+ years in the system, I'm in the same boat. And although I've given it a good thought, I don't see a solution either.


The truth this article touches on is that it's not even really about citation counts anymore, it's about dollars: via drug companies, private grants, federal grants, any sort of money coming in from outside the university or hospital. I've sat in on promotion meetings where people aren't publishing, but are bringing in $2 million grants and "it's all ok".

This incentive structure distorts research. What to do about it? I'm not exactly sure but having university administrations who see research as a means to make money, rather than an end in itself, is not the way.


> This incentive structure distorts research. What to do about it? I'm not exactly sure but having university administrations who see research as a means to make money, rather than an end in itself, is not the way.

A lot of this distortion is due to the lack of government funding for state universities, which keeps going down as the cost of running a university keeps going up. Because of the rising costs, things get pushed to extremes and the accountants/administrators are put between a rock and a hard place.

Depending on the university in question, it might not be just about "making money" but just not shutting the university down and trying to swim upstream in a never-ending battle.


I agree that's part of it. But I've also seen university administration proudly let state governments know that they're weaning themselves off of state funding. That can be seen in different ways, but it's clear that they've internalized the message so much that they don't push back against it at all.

More fundamentally, I think the US really needs to step back to basics and think about the different ways research can be funded, and to what extent current funding systems map onto those different mechanisms. I think there's a lot of dishonesty about what is going on, in terms of how research is funded in theory versus what the actual expectations and practices are, and what the consequences of all of it are.


That's also definitely true for universities with enough reputation and infrastructure to score large contracts/grants with private sector corporations (or mega US defense contractors).

I also know a couple of chemistry professors. One is new; his PhD was done overseas in Saudi Arabia, and he basically said it was just large-scale brute-force data analysis and discovery of chemicals in Python, with a small amount of lab work to prepare the datasets. Basically there's a huge pipeline for this sort of work: discover new chemicals in labs, get a PhD out of it. But it's all just sort of an engine that brute-forces the chemical possibilities, not real meaningful research. I never asked, but I assume the results just get fed into a pharma company or something.

Then you have those chemists on the opposite end, ready to retire, who are working on cold fusion because, why not, it's interesting to study even if it goes nowhere.

Our whole society has kind of turned into a giant web of a machine that can't be untangled without ripping everything apart. Some of it is stupider than others, but most of it is kind of a bullshit layer on top of a bullshit layer. We obviously see that in IT as well, with marketing tools that are just wrappers for simple underlying tech that anyone could set up. Sometimes there is significant value added, sometimes there is none (or even negative value).

I don't know where I'm going with this so I'm going to stop typing.


I'm not so sure the costs of running a university are going up. Here is expenditure data from the ed department:

https://nces.ed.gov/programs/digest/d20/tables/dt20_334.10.a...

Expenditure per full-time-equivalent student in constant 2019-20 dollars is up about 10% over a decade in the instructional and student services categories. That's not that much. (The total number is going up, driven by costs in hospital services, mostly)


Put controls in place. Monetary openness. Every dollar accounted for.


That's possibly the worst thing that could happen to science. Academic research should not be an industry. The fact that it is now considered as such in the western world will be our downfall.


From the looks of it, it already seems like an industry, even if it tries to appear not to. Of course not all sectors will have the issue, just some (medicine and high tech, for example).

Putting safeguards in place is a good thing. It’s like traffic rules that people need to adhere to.


Nothing will ever get done, but at least the busywork will be well documented!


Stop trying to apply the 'free market' or capitalism to every situation. The problem is more fundamental than your field: As long as those economic tools are treated as sacrosanct, universal solutions by society in general, your field inevitably will suffer.

(They are very useful tools, but like very tool, not the solution to every problem.)


> But the issue is what criteria will you use for promotion if not that?

Dice roll? Alphabetical order? If it's gonna be arbitrary or close-to, it would be preferable for everyone if it didn't erode science as a side effect.


Would estimating the prior help? E.g. this study claims the sky is red, and most people estimate that is wrong. Therefore this study is likely wrong and needs a lot of evidence to overcome the prior.
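
That's essentially Bayes' rule in odds form. A minimal sketch with made-up numbers (a 1% prior on the claim, and studies whose evidence is 10x likelier if the claim is true than if it's false; both numbers are assumptions for illustration):

    def posterior(prior, likelihood_ratio):
        # Bayes' rule in odds form: posterior odds = prior odds * LR.
        prior_odds = prior / (1 - prior)
        post_odds = prior_odds * likelihood_ratio
        return post_odds / (1 + post_odds)

    p = 0.01  # "the sky is red" starts out very unlikely
    for study in range(3):  # three independent supporting studies
        p = posterior(p, 10)
        print(f"after study {study + 1}: P(claim) = {p:.2f}")
    # -> 0.09, 0.50, 0.91

One supporting study barely moves a surprising claim; it takes several independent strong results to overcome the prior.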


Yes but the thing is… papers are counted, not read. They are published to be counted, not read.

Really.


How is ranking by clinical outcomes “even worse” than grading by quantity of papers published?


Because it pushes us to treat cases where we expect a good outcome preferentially, i.e. biases the case mix towards easy patients and increases inequalities in access to healthcare.


This is true in any field. Many publications are trash. It takes a lot of experience to be able to tell apart bad science from good science in a specific field.

Every PhD knows how to read literature. But throw them outside their narrow line of expertise, and they will not be able to tell apart a good from a bad paper. It's the same reason you cannot trust your FB Karen who found a publication that supports her conspiracy theory.

So yes your wife is right in looking for the most experienced person in the field for an opinion.


> never ending stream of terrible research

When I was in elementary school, I saw the same thing. People forced to write essays on things they were not necessarily good at, just in order to advance to the next year. The result, of course, was a crapton of half-assed essays, many of which were bullshitted together not long before the deadline. I sincerely doubt even half of them were in any way useful whatsoever.

At least in my school, few people improved their essay skills over time. Maybe they learned more technically impressive ways to bullshit, but their writing did not improve. (Some people's writing improved, but often they turned out to be writing things on their own interest in their spare time.)

I know at the time I said to myself, "This is dumb, but it's just school. It's not like this for adults, of course. They have their shit together."

Turns out many adults still do that!

This is bad on so many levels it's bordering on unintentional evil. Someone should start a revolution.


Jeez, everyone here at level X is a bunch of clowns winging it. It's not like this for level X+1, of course. They have their shit together. :-)

Then at some point it dawns that it's just clowns from bottom to top.


Exactly. That's the big mystery as far as I'm concerned: how the hell does the world work as well as it does, given how clueless the people running it are?

I'm starting to suspect it speaks to the adaptability of people.


Not knowing the medical field, but having been involved in research in technical fields, I can confirm the problem with the "HAVE to publish" phenomenon. I came across sooo many long but barren papers that it was a tiresome and mostly fruitless exercise just to follow proper scientific methodology in terms of staying informed of related work. I am actually glad I no longer live in the academic research world (I still do research in industrial settings, "improperly", tied to development). I also hated that I had to make up a quota even when the research had not yet yielded interesting enough (partial) results, figuring out something that could go into a paper for the sake of writing a paper alone, not for communicating new knowledge.

Formality, not content, ruled.

(I just hope other places/institutions/fields do it better)


The pandemic was an interesting case study in this, in the sense that although the alternative medicine people who started writing about covid after the pandemic started were generally giving terrible advice, the alternative medicine people who were writing about how to survive a novel coronavirus epidemic before December 2019 were generally much more accurate than the mainstream medical people who only started learning about coronaviruses after the pandemic started.

It turns out, being genuinely interested in something when there is zero money to be made is a much better predictor of success than having formal training and insider status.


It would be cool if you could do an analysis of the alternative medicine people.


I question that Golang* is a good programming language, but lots of expert coders disagree. While your theory has merit, I wonder if, at the top of surgery, everyone has ideas so subtle and nuanced that when they tell others about their experience, their claims can't be translated easily, because every expert has different ideas. I don't doubt that there are some people, as you say, with a profit motive to bullshit everyone; I just question the proportion, given how bespoke being at the top of any industry might be.

* sorry Golang people and anyone offended by this it’s just an example


25 guys from IBM, 20 from Microsoft, 3 from Facebook, 10 from Google, and 14 professors from different universities sit on a body that dictates how you're supposed to program. If you don't put 20 references in your source code to "The Gang of 4", your credentials will never be enough to do anything but write doubly nested for loops.

Luckily the IT industry can just ignore them and demonstrate something better without asking six other bodies to check whether they are allowed to build it.

But in truth I'm starting to worry about the influence of the Hollywood-NSA-Savethechildren "consultants" on laws that peddle "technical" solutions to make general computation a service instead of a human right.


Also, if you are an effective surgeon, how much time do you have (or have to spend) on writing and formulating scientific research? I wonder if research should be added to the old adage about teaching...


So much of what you use today and the modern world runs on research and the scientific process in general, warts and all. Your comment is hubris at its finest, and insulting to researchers and teachers.


Evidence-based medicine, more than the indeed important issues raised in the article, is mostly impaired by the absolutely abysmal average quality of clinical research papers. The absolutism of higher-ups combined with their unrivalled (in the scientific world) statistical ignorance is hurting science more than anything else. And there's a reason to that: MDs are not scientists. We're highly-skilled workers, and more of the blue-collar kind.


> Evidence-based medicine, more than the indeed important issues raised in the article, is mostly impaired by the absolutely abysmal average quality of clinical research papers. The absolutism of higher-ups combined with their unrivalled (in the scientific world) statistical ignorance is hurting science more than anything else.

I couldn't agree more. Throughout the pandemic there has been an infuriating pattern of terrible research coming from medical doctors, followed by dismissal of any/all critiques by people much more qualified to make these critiques, because "they're not doctors".

The average medical doctor has next to zero ability to evaluate a study, and is easily fooled by elaborate statistical techniques into concluding that up is down and left is right. Meanwhile, scientists and statisticians with enough experience to see the methodological flaws are dismissed, and policy makers turn to MDs -- who are often wildly out of their depth -- for pronouncements on "science".


To do high-quality research you need science/statistics knowledge and domain knowledge. No matter how good a scientist you are, data is really hard to interpret correctly without domain knowledge.


Yep. But all the domain knowledge in the world doesn't help you when you mis-apply a statistical analysis, use a bad experimental design, or don't properly control for confounders. These are most often the limiting factors in medical research.

Regardless, the bigger problem comes when we're told that we can't possibly be seeing that fatal flaw, because we don't have an MD.


Absolutely agree. Data have no conscience. We cannot apply aggregate data collectively to all individual human beings, which is exactly what has been happening for over two years now.


> The average medical doctor has next to zero ability to evaluate a study

Yes, but...

> dismissal of any/all critiques by people much more qualified to make these critiques, because "they're not doctors"

... unfortunately, no. While it's true that most MD clinical researchers are depressingly bad, it is also true that pure scientists who never set foot in the clinics are very limited when evaluating practical aspects. So the situation cannot be trivially solved by allowing scientists to do more. In my experience, the problem has no real solution right now and I expect we'll have to wait on technological progress to allow improvements.


Yeah, this is the really hard part about medical research. At the individual level for those who are really qualified to do it, the incentives lean in favor of practice, where your comp is 4x-10x higher


Exactly, yes, a hundred times yes! This is the major flaw in our system. But I don't see a way of rebalancing to saner priorities. And apparently, neither do our supreme leaders.


Don't get me started on the loans either -- sometimes those incentives aren't just incentives, but necessary outcomes given your financial state exiting med school. I know folks who emerged from residency (where they moonlighted too!) with 500 credit scores and debt that can be described as multiple new Ferraris. They'd laugh at the idea of anything but practice


> it is also true that pure scientists who never set foot in the clinics are very limited when evaluating practical aspects.

Yes, but most medical/epidemiological papers fall over under the lightest methodological scrutiny. Even advanced scientific concepts like "have a control group" are seemingly beyond the analytical capacity of a fair number of MDs. When you get into things like "comparing small effect sizes with confidence intervals", they're positively adrift. We really are talking about table stakes here.

I'd love to live in a world where MDs were sufficiently knowledgable of experimental design so that domain expertise is the limiting factor, but I can count the number of such papers I've read in the last two years on one hand.

I could write a much longer post on the arrogance of researchers when interpreting medical data or extrapolating lab results into humans...but that's a different subject.


Any books so I can get better at interpreting papers? Thanks


How about an example, please.


> We're highly-skilled workers, and more of the blue-collar kind.

While it’s a bit of a problem that sometime during the last century a skilled profession got equated with a research one (I remember hearing somewhere that a commercial pilot’s education costs about as much as a tenured professor’s, and it seems plausible to me), there’s also the fact that the people you call “workers” are more distanced from research than they used to be; my impression is that this is even stronger among engineers than it is among medical professionals, but I can hardly claim any deep knowledge here.

MIT is originally an engineering college, so is the École polytechnique, and the École normale is a school for bureaucrats. Cauchy’s seminal work on wave propagation was presented at the French academy, had an epigraph from Virgil (“how many waves / come rolling shoreward from the Ionian sea”), and was signed by “Mr. Augustin-Louis Cauchy, road and bridge engineer” (this being the title the Polytechnique awarded him).

It used to be normal (AFAIK) for articles in medical journals to be essentially “hey, look what a weird thing happened with this recent patient of mine”, and some seminal (if not the most high-brow) mathematics journals started as “hey, solve this problem I have recently done / want to know the answer to ... if you’re not chicken to try”.

What happened to all of this was, in a word, specialization—necessitated in part by the exhaustion of low-hanging fruit and raising of epistemic standards, so not entirely pointless, but I have to admit the comparative conservatism of medicine in this regard always looked more hopeful to me than the, at times, raging anti-intellectualism of engineers or the fiddly, illogical, invented world of law.

Certainly medicine is more engineering than science in that it is driven more by immediate goals than by the pursuit of truth alone (not the only possible definition of science, but a useful distinction when it comes to the feeling of the field, cf the 50s adage about physics and military physics being like music and military music). And I certainly won’t say you’re wrong. But I’m sad, and I can’t help feeling the drive for industrialization has done a disservice to us, scientists and craftsmen both.


"raging anti-intellectualism of engineers"

What makes you think engineers are anti-intellectual?

Engineering is an intellectual discipline, after all.


> What makes you think engineers are anti-intellectual?

FWIW, I’ve encountered the thing I call anti-intellectualism here among experimental physicists as well, and even in some theorists and mathematicians as well, to varying degrees. I don’t mean being stupid or not using your brain for work; I’m sure some of the people I’m thinking of here are much smarter than me and/or have contributed much more to society than I ever will. I’m not thinking of all engineering or science people, either; the maths teacher I had in high school is perhaps the strongest counterexample of this stereotype I know.

What I mean is a certain lack of wonder and ... suspension of disbelief(?) when it comes to other kinds of intellectual pursuits (sciences, arts, humanities, philosophy, mathematics), like a feeling you’re doing Real Work while those other quacks do pointless things to amuse themselves, and are not even particularly successful at that. “Shut up and calculate”, if brought out among people who are past the basic level of understanding of the subject in question, is a manifestation of this (even though the originator of the saying in the context of quantum mechanics, David Mermin, is now a proponent of the “quantum Bayesianism” school of philosophical thought). Lest it seem that I’m dunking on physicists here, the “Two cultures” essay and Hardy’s “Apology” also contain some of this.

If you’ve read the book, the shortest definition is perhaps to say that my “anti-intellectualism” is the opposite of The Glass Bead Game, even if Hesse intended it as his anti-Enlightenment manifesto.

And I cannot say this sort of skepticism is always wrong—that way lies Fashionable Nonsense and New Age mysticism, and epistemic relativism scares me like little else. We do owe some advances in quantum foundations to New Age people, though, so sneering in that direction might not be right either.

(If you think that this anti-intellectualism thing and the negation of the hacker spirit seem remarkably similar, you’re getting it.)


That sounds more like intellectual snobbery and scientism rather than anti-intellectualism.

This is especially evident towards fields one knows little about, towards the so-called "soft sciences" and the humanities.


... And also towards fields that are very close, like the famous characterization of the appearance of group theory in particle physics as Gruppenpest. (Lev Landau was perhaps the last of the physics greats to share this view, but he still made it live on as the absence of a standard group theory course in physics curricula throughout the ex-USSR. Of course, the problem is not that Landau was somehow especially shortsighted, but that keeping a university curriculum essentially unchanged for half a century is stupid and that a single school of theorists, however mighty, should not be able to dictate the course of a field in the whole country.)

I know the term scientism came to be used as a pejorative in humanities discourse and consequently in journalism, but I’d prefer not to use it here, both because of the contemptuous connotation and because it was originally coined to mean something much more interesting: a counterpoint to a naive, maximalist sort of positivism, of attempts to dismiss any picture of how the world actually is as unverifiable and therefore the question itself as meaningless. (This is the pop culture impression of Popper, though AFAIU Popper realized the problem and said far more nuanced things than that.)

The argument went along the lines of, “if talking about the ‘real world’ is meaningless and science is exclusively about experiments in labs, then it sure is peculiar that those experiments appear to be so remarkably coherent, even with things outside of labs—almost as though there actually is something worth calling the ‘real world’ and that science is in fact about that thing?” Thus, scientism.

(Cf. Liberalism Unrelinquished[1] on a similar issue of terminology.)

[1] http://liberalismunrelinquished.net/


I disagree. Popper introduced the idea of scientism in a manner that comports with how it is used here.


I suspect the poster is referring to being anti the type of intellectualism which is pure theory (or similar) and not application focused.

Which I'd agree with - most Engineers tend to hate that kind of thing.


It's easy to find them. Let us call them out using the siren song.

Psychology and sociology are just as much sciences as mathematics, computer science, physics, chemistry, and biology.


I think antifragile theory is needed to really understand anti-intellectualism.

Engineers love to convince themselves that older and simpler is always better.

So, you might have someone who can solve a differential equation, but seriously looks down on anyone with an Alexa, and thinks we will never be able to transition to all electric cars cleanly with future tech.

They are sometimes biased toward their senses much more than numbers. An engineer could totally still say "Metal is always better than cheap plastic crap" and make a power drill with a solid steel case or something. The technical analysis only matters if you actually do it.

They can do tech on a personal level but they don't trust the large scale process of technical improvement.

The other commenter mentioned disregard for the humanities and I think they are related.

"My intelligence is more valuable than your education" is a common belief. People seem to see life and work not as a series of projects, but like a sport, where the goal is to test yourself in a pure way, without "cheating".

The humanities often don't have objective success criteria, they are seen as nonessential, and they often have a lower barrier to entry (some people can learn to draw from zero in a few weeks; math takes months to years before you can do anything you can't do with an app). These people judge value by the amount of challenge.

It just doesn't fit the "Living off the land, surviving on your own skill, being practical instead of academic, trusting your eyes not math" kind of paradigm as well.


"Engineers! Too dull to be artists, too stupid to be scientists."


And yet they know the difference between theory and practice.


In theory, there is no difference; in practice, there is.


Are you an MD or scientist? Historian of science? It would be useful context for us.


What I think is hurting science more than anything else is people conflating academia with science. Academia, the process where ideas are evaluated on how popular they are, has been around for thousands of years, usually controlled by religions.

Science was literally invented as a reaction against academia, and the world was better for it.

I say this because, while I agree with the notion of what you are saying, you are making the same error.

Real, and amazing science is being done right now at Monsanto, at Intel, and at every medical company around the world. None of these problems you mention exist in this real world of science.


I 100% agree about MDs, but believe me I've seen plenty of garbage basic bio research done by people who took a pure PhD track and ostensibly passed a statistics requirement. So I don't think it's the only issue, even though it probably is a big one.


> ignorance is hurting science more than anything else

It hurts more than science, it injures, kills, maims, and blinds humans. These are people's children, mothers, and fathers.


Disagree. Think your standards are too high.


Your standards are too low. Human lives are at stake in medicine. Do you want a doctor treating you when they are misinformed by fake science?


I'm not sure that's any worse than a doctor misunderstanding real science.

Note that the root cause (unfamiliarity with the tools of science, especially probability and statistics) is largely the same.


> I'm not sure that's any worse than a doctor misunderstanding real science

It is worse. A highly experienced practitioner who would've done the right thing might get led astray by new bogus research. Precisely because most practitioners don't have the ability to judge scientific work on their own.


There's a difference between mistakes and being maliciously tricked into making those mistakes


Sure, there is a difference of intent, and a difference in motivating incentives, but the line is fuzzier than we like to think (eg. industries providing no-strings-attached funding to study everything but the area that has issues they wish to crowd out).


I want doctors to not treat studies like “science sez” and instead recognize that “Science is messy”. They should use their training and experience to heal, not the latest publication.


After reading this article I find only one problem with it, and that is the title.

The problem the authors have so perfectly characterized has nothing to do with the emphasis on evidence that medicine requires, but on the distortion imposed on the system by large pharmaceutical companies.

The reality of the obstacles standing in the way of rigorous testing for drug or therapy effectiveness does not make evidence-based medicine an illusion.

In fact, the authors actually make a strong argument in favor of evidence based medicine, by stating the need to acknowledge these biases, and that the reason for all these biases is greed.


The article shows how what passes for evidence based medicine is an illusion.

The "evidence" is biased and corrupted by corporations, as are many of the people who evaluate it, set the standards, and make decisions.


I think you're misunderstanding the title. It's not that evidence based medicine itself has problems. It's that the way evidence based medicine is currently practiced gives it the appearance of being evidence based while actually being nothing close: that is, an illusion.


Well, yes. My point was simply that the title misrepresents the article, because it is ambiguous. "The illusion of evidence based medicine" can be interpreted as a statement of how the aim of evidence based medicine is illusory. The short introductory paragraph, "Evidence based medicine has been corrupted by corporate interests [etc]" furthers this idea, having a somewhat definite tone, as if this corruption were fatal to any proper scientific inquiry.

The rest of the article dissipates this confusion rather quickly.

Stepping back a little, how would one react to a title such as "The illusion of medicine", making the same general argument?


Stop calling it greed. It's corruption and regulatory capture. Both of these, unlike greed, are problems that can be solved.


> It's corruption and regulatory capture.

Absolutely!

> Both of these, unlike greed, are problems that can be solved.

Hmm... I don't see how. Pharmaceutical companies (and all other corporations, for that matter) pay the lobbyists to write the legislation that is then duly waved through. This is to say the governance structure itself is captured.

In fact, I think it always was captured, but only that we have laboured erroneously under the idea that it was there to help us. And creating that illusion is the job of education and corporate media.

Until we are disabused of the idea that government is anything other than a parasitical wealth-extraction process; until we accept that no one has the right to forcibly take another's money (tax), and that to do so is to subject us to a type of slavery; until we see that legislation has very little to do with morality, and that we are given our opinions first at school, through 18 years or so of indoctrination which is then topped up via screens and newspapers; until we recognise that we are sovereign beings who will stand up for what is right and refuse what is wrong - we're not going to change much.


Why not? Greed is not just some intangible human feeling. It can take the form of corruption and regulatory capture whose motivation, in the case of large pharmaceutical companies, is profit.


Greed is human nature. Eliminating or at least minimising corruption is doable; eliminating greed is not, unless you're willing to start engineering humans.


One thing that has always stuck with me in regards to this topic is the lack of medical research in areas that need it but don’t get it because there isn’t any money in it. One area that comes to mind is feline urinary tract disease. Many years ago, I had a cat who kept coming down with this, and the vet(s) were at a total loss as to how this kept happening. So I decided to open up the literature and plough into it as a layman with no knowledge about animal medicine. Within the space of a single hour I quickly learned that many of the most common diseases facing animals have very little research behind them and a lot of unknowns. Which brings me back to one of the main points of the author. If the neoliberal approach to research is only going to focus on what is profitable to treat, then medical science as a whole has backed itself into a dark corner.


I'm sorry but veterinary medicine is an entirely different problem domain with little overlap beyond capitalism and mammalian biology.


No, it is not. Maybe you have heard of animal testing before. If what you say were true, we wouldn't be using animal testing.


Animal testing is to test human drugs/treatments/cosmetics on animals to see if they are safe enough to test on humans.


A lot of basic research is also done in animals; it's a pretty important part of the process that led to many drugs being tested in the first place.

But it's true an understanding of the animal is usually treated as an unimportant tangent. Like you might introduce a mutation into a mouse gene that is analogous to a mutation thought to be relevant in humans and then look at how that affects some other biomarker of interest in the mouse. But you're not going to be studying some issue that arises naturally in mice. Hell you're only using the mouse because it's an established lab model, you don't give a shit about mice.

Also, all the mice are hella inbred and grow up in a sterile cage, so they're hardly even reflective of IRL mice. So even if someone did want to fund lots of research on mouse veterinary science, it would still be a long road (technically and culturally) to get that integrated with basic medical research.

I'm not saying this is the way it should be, but it's the way it is.


"animal testing" and "healing animals" are two diametrically opposite things


My autoimmune disease is likely caused by modern technical advances and is almost nonexistent in countries that haven't modernized. There's very limited research done on fixing it and lots done on treating the symptoms. Each commercial I see advertising a new medication to treat the symptoms is like a slap in the face.

It's played a large part in my being vaccine hesitant. My life has been severely impacted by "trusting the science". It's very likely the science that ended up negatively impacting me is not harmful for 99% of the population. We're trying to reverse engineer systems by clicking buttons on the UI instead of reading the code.


> My autoimmune disease is likely caused by modern technical advances

I'm absolutely certain that my autoimmune disease manifested itself later in life due to chronic stress at work (a modern advancement so to speak). While not the cause, it was certainly the catalyst. Now that the proverbial Pandora's box of autoimmune diseases has been opened inside of me, there's no closing it back up and I can only manage it to some degree.

> There's very limited research done on fixing it and lots done on treating the symptoms. Each commercial I see advertising a new medication to treat the symptoms is like a slap in the face.

Yup, exactly this. You have to take things into your own hands and figure out what works for you in terms of management. Not many doctors are going to take the time to sit down with you to figure out the root cause(s).


What autoimmune disease if you don’t mind me asking?


I would guess Guillain-Barre, since that is triggered by some viral infections and also sometimes vaccination (perhaps the same underlying reason). It affects very few people after vaccination, but that information is not going to be much comfort to the people it does impact. It is one condition that is a known risk factor for certain vaccines (or of various viral infections). That it could have happened anyways, from an infection, is also probably not much comfort to those who end up with it.


Exactly. And this is true of other medicine, not only medicine produced for felines.


> areas that need it

The premise of capitalism is that enterprising individuals and institutions will respond to the monetary incentive created by these needs.

Could it be that what is unprofitable to treat is maybe not that big of an actual need? Can I argue that I'm happy that there has been no research on feline urinary tract disease as long as human cancer isn't solved yet?


Putting aside cats, there are plenty of diseases that impact humans which don’t get researched enough due to being very rare, e.g. <1 in 100 million people.

> maybe not that big of an actual need

Not that big of a need to whom? It’s certainly a very big need to the people who suffer from or are dying from rare problems.


> Putting aside cats, there are plenty of diseases that impact humans which don’t get researched enough due to being very rare, e.g. <1 in 100 million people.

Sure, and is that a problem? Should we as a society not apportion medical research spend to the most impactful areas? I'm curious to what extent the misalignment of incentives is due to capitalism as opposed to the actual need being lopsided.

> It’s certainly a very big need to the people who suffer from or are dying from rare problems.

I totally agree. At the same time, society cannot put all of its resources in support of very rare cases at the expense of common issues of similar seriousness.


Sometimes fixing rare, obscure bugs others haven't bothered with can lead to massive discoveries. Everybody could be banging their head against the same wall because it looks most profitable; maybe you could be the one to crawl through the window instead.


Your premise doesn't really survive the fact that death by starvation, thirst, and exposure is still a thing.


The best counterexample I can think of is the unwillingness of drug companies to produce new, narrow-spectrum antibiotics. We're facing a looming crisis of bacterial antibiotic resistance, we know that producing the aforementioned antibiotics would help resolve it, but the low profit margins prevent pharmaceutical companies from doing the research.

Capitalism pursues profit growth, not human need. There may be some correlation between these two forces but it's obviously not perfect.


There isn't money in narrow spectrum antibiotics, because there isn't need yet. We don't have infinite resources, so putting effort into a looming, but not yet existing crisis can very easily lead to worse outcomes.


Yes, this is a more plausible example of regulation needed to align capitalist incentives with societally beneficial outcomes (vs cat UTI research).


In my (admittedly idealistic) view, research is something that a society should fund without the expectation of it always returning an ROI. I don't know what the minimum level of monetary support should be, and I am aware that it could lead to bad actors. But I don't think capitalism should be the primary motivator for deciding what path research should take. We can always research things in parallel. One might even help the other. But we won't know unless we try.


It’s tricky for both sides to go for a pure capitalist point of view:

- human cancer might never be solved if strong business entities were to rely on the prevalence of cancer

- death of cats might have secondary, tertiary effects that are not clear enough to push businesses to enter the market. We would be looking at the negative impacts without ever realizing what the cause is.


> Human cancer might never be solved if strong business entities were to rely on the prevalence of cancer

Given the amount of money someone with a patent on a cancer treatment or cure would make, it's hard for me to imagine that


You are referring to a technical solution, but the patented miracle cure could be limited to 1% of patients and never make it to the rest, if for whatever reason it didn't make economic sense to do so. It's a bit of goalpost moving, but I wouldn't call that "solving" cancer.


There is a bottleneck in the number of MD positions. Without the MD title, your typical life science researcher cannot easily carry out medical research. Medical education and training has a scale problem.


It's possible to get closely involved with this kind of research with just a PhD, but there are all kinds of issues with that career path too.


To conduct good clinical research, I think you need to understand the constraints of clinical practice. It's not only a matter of holding the title, but more of practical experience.


You do need someone with the clinical experience to understand the questions that need asking and to confirm the trends and suspicions seen in clinical practice, but clinical academics are expensive, and also terrible at stats and good research practice. I think the right mix of both is important.


As someone who recently complained to the UK Royal College of Veterinary Surgeons and got palmed off because the UK legislation doesn't quantify "reasonable care", the whole system is one giant authoritarian money-making scam.

I don't know what professionals are being taught in med/vet/law school nowadays, but technology is increasingly demonstrating that professionals could be out of date by the end of their course!

So that problem of trying to stay current in IT is spreading out into other professions, and I don't see them tackling this problem like the IT sector has/does.


> As someone who recently complained to the UK Royal College of Veterinary Surgeons and got palmed off because the UK legislation doesn't quantify "reasonable care",

Pretty hard to quantify. "Reasonable" is a widely used concept in the law for this reason. What were you expecting them to do? How would you define "reasonable care", exactly?

> I don't know what professionals are being taught in med/vet/law school nowadays, but technology is increasingly demonstrating that professionals could be out of date by the end of their course!

Surely nobody ever thought otherwise. Doctors aren't superhumans, none will ever know everything and there are good and bad ones. Exactly the same as any other person in any other field.

> So that problem of trying to stay current in IT is spreading out into other professions, and I don't see them tackling this problem like the IT sector has/does.

The IT sector does? How? I thought IT was pretty mediocre at it. And the challenge of staying current with new developments predates the computer by a few millennia doesn't it? This isn't something new to IT.

Have you actually looked to see how the medical field tackles this problem? I'm not in the medical field but some family members are. The amount of training, certifications, exams, etc they are required to do is staggering. Far more than any software developer I've ever known.


All fair points, but medicine is highly cautious, and most would prefer to go with the larger entities' opinions/research. So Merck (https://en.wikipedia.org/wiki/Merck_Veterinary_Manual) has a massive influence, which introduces lag. Some do read PubMed to access more up-to-date studies, though (Google) Scholar can be a better search engine to use than PubMed. I'm also aware of the Eli Pariser filter bubble, which will affect what the vets see and what I see. I also have to do repeated searches, sometimes over a few weeks, months, or years, to see what the search results are throwing up that they didn't previously.

> Have you actually looked to see how the medical field tackles this problem?

Yeah, and they don't tackle the problem to my satisfaction, instead using older technology/medicine. No one wants to put their neck on the line, because they don't want to look at human studies where biological similarities exist, or at studies in different species, like the good old rat.

So some in-vitro studies exist because it's impossible to study in-vivo; the tech just doesn't exist for in-vivo work, and that is an area where they are highly cautious. There also seems to be a general dislike for using non-patentable chemicals!


The medical industry is corruption-prone.

Another example: industry-sponsored conferences. Doctors are invited to a 3-day, 4-star hotel stay to attend a conference. Technically, talks that are paid shilling are specially marked as sponsored talks.

But are the other doctors, who are getting paid to speak there 100% independent?

Will you be a 100% independent doctor after getting free presents?

Maybe you will. But industry has probably found it profitable, given they spend millions on these kinds of conferences.

Also, a lie repeated multiple times makes you believe it; there are psychological studies on this. So when you spend an hour listening to how drug X is gold, you might start believing it's gold.


My wife’s a doctor and the “free presents” she’s gotten from conferences are laughable. A 128mb flash drive and a canvas bag are about the best things I’ve seen.

She’s also never gotten a free hotel stay at a conference unless her group was paying for it.

She tells me there are strict (very small) limits on gifts from drug companies.


True... but she's not a famous tenured professor, is she?


What does being a famous tenured professor have to do with the OP’s claim that doctors attending conferences were getting free gifts?

I’m not talking about the separate claim that speakers get paid to speak.


Well, that's precisely the crux of the matter. When I go to conferences, I also only get a measly ballpen with a notebook if I'm lucky. Corporate financing is mostly targeting established researchers who, crucially, hold influence over their peers. It's just like YouTube influencers. So the question is not what we get at the conference, but what arrangements are made for the funding of future projects for those researchers. And conferences are great locations for interfacing well-known researchers with industry representatives. That also happens elsewhere, of course.


Again this is a different thing than “doctors go to conferences and get free hotels and gifts”. If you think researchers (the majority of whom aren’t doctors) might make shady deals with industry reps at conferences, I don’t have any evidence they don’t I guess. I don’t have evidence that industry reps don’t just call them on the phone and offer bribes either.


>Will you be a 100% independent doctor after getting free presents?

>Maybe you will. But industry has probably found it profitable, given they spend millions on these kinds of conferences.

So... theoretically the industry putting on the conference is also pushing the state of the art pharmaceutically and believes that it has an improvement on the existing standard of care. I don't think it's got anything to do with the presents as much as the message.


Nah. Speakers fees are a way to launder direct contributions. There was at least one famous case where the "conference" was done in one of the speakers' living rooms. Imagine getting a speaking fee for speaking extemporaneously in a Florida hotel room with two pharma reps and one other doctor five minutes before you all leave for golf.


>Nah. Speakers fees are a way to launder direct contributions

The comparison to money laundering is specious, IMHO.

In the US, at least, there is a database[0] of any value (including monetary and in-kind) exchange between doctors and medical companies.

As such, there is no "laundering" at all. All moneys and in-kind (hotel rooms, meals, swag, etc.) payments are documented and detailed.

Is there an incentive for doctors to favor a particular pharma or medical device company based on those payments? Perhaps. But since (again, at least in the US) such payments are documented and publicly available, it isn't some sort of secret set of payments designed to surreptitiously co-opt doctors.

Sure, some payments are sizable: my brother, a physician, received ~US$20,000 in 2020, mostly (~$13,000) from a single consulting fee.

Is my brother favoring the company who paid him $13,000 in 2020? Maybe. But if and only if their products/devices have clear beneficial effects over other products/devices.

What's more, that $20,000 (aside from the consulting fee, it was food/beverage and other in-kind stuff) is a small fraction of his annual income and doesn't make a significant difference in his quality of life.

As such, while there may certainly be doctors who are co-opted/corrupted by medical companies, assuming that the majority of doctors are swayed by such things is iffy at best.

[0] https://openpaymentsdata.cms.gov/


Seems very unlikely to me that it would not be a combination. We have both rational and irrational parts, conscious and unconscious parts.

I'd venture one does not have to look far in the anthropology literature to find good evidence that gifts, just like other favors, serve a function in building reciprocal relationships. Since it is likely so fundamental, I have a hard time believing it would not have any effect on the independence of the doctor.


I don't disagree, but I think it's likely only so much as to make them more receptive to the message of superiority.

I've actually been wined and dined quite a few times at pharma-rep presentations as a guest of an M.D., so it was interesting to see the process. In general... it's definitely a sales thing, but the message was never "Here's how much money we'll give you" and always was "Here's why you should be prescribing this new drug to patients". Typically it would involve a presentation that first offered a bit of review of underlying mechanisms and disease processes, then highlighted the need (how, why, and how severely condition X leads to bad outcomes, and why it should be addressed aggressively in patient population Y, lightly considered in population Z, and isn't needed in A), then discussed the current standard of care (options B, C, D, etc.), then discussed the new drug (how it works, why it's better than existing options and to what degree, side effects, contraindications), and then a nuanced discussion weighing issues related to the drug (e.g. it's excreted through the kidneys... is untreated condition X worse in renal patients than treated X with side effects?), followed by a long Q&A session. Rarely if ever were costs discussed, except perhaps whether it would be covered by insurance carriers.

Everything was pretty factual (as far as I could tell) and to the point and aimed at treating patients better.

Now... the wining and dining I think definitely could make humans more receptive to the message, but the vibe wasn't at all that of an exchange. They just needed to do something to get the ears of the M.D.s, and treating them to a nice fancy dinner, with some guests allowed, was a way to do that. A nice gesture, but not of an order of magnitude that someone with doctor earnings would even remember for long.


Many states have banned those kinds of pharma lunch-and-learns now. My wife says they basically don’t happen anymore (at least in her circles).


"Even $20 meals can sway doctors, study finds"

https://www.sfgate.com/health/article/Fancy-meals-can-sway-d...


> The medical industry is corruption-prone.

Corruption is incentivized in any for-profit industry.


And not-for-profit industries.


And groups of people who are driven by ideology, rather than profit.


profit IS an ideology.


Fine. I'll rephrase it:

> And groups of people who are driven by ideologies other than profit.


This essay about an American Psychiatric Association conference, with the pictures of all the ads plastered everywhere, was kind of fascinating: https://slatestarcodex.com/2019/05/22/the-apa-meeting-a-phot...


Drug company bias is definitely a well known issue. But there are even bigger issues with science based medicine.

* MDs are not PhDs for one, though people often equate the two.

* Things like surgery are in fact hard to test. There are ethical issues around it. E.g. are you really going to do a sham surgery "for science" if you think there is a chance the real one might save the patient's life? On the flip side, surgery has a lot of risk, and performing surgery that doesn't work exposes patients to unnecessary risk.

* There are entire fields where pseudo-science is common. Notably, a lot of physical therapy and chiropractic is not science based.


On the difficulty of testing lifesaving treatments, there's a great paper on this topic:

"Parachute use to prevent death and major trauma related to gravitational challenge: systematic review of randomised controlled trials" [0]

It has some great gems like:

  - No randomised controlled trials of parachute use have been undertaken

  - The basis for parachute use is purely observational, and its apparent efficacy could potentially be explained by a “healthy cohort” effect

  - Individuals who insist that all interventions need to be validated by a randomised controlled trial need to come down to earth with a bump

[0] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC300808/


A Christmas BMJ article was subsequently published reporting the results of an RCT of parachute use when jumping from planes. Apart from a few minor confounding problems with selecting the participants, it's very good science! Found no evidence of benefit, sadly...

https://www.bmj.com/content/363/bmj.k5094


So funny!

> RESULTS Parachute use did not significantly reduce death or major injury (0% for parachute v 0% for control; P>0.9). This finding was consistent across multiple subgroups. Compared with individuals screened but not enrolled, participants included in the study were on aircraft at significantly lower altitude (mean of 0.6 m for participants v mean of 9146 m for non-participants; P<0.001) and lower velocity (mean of 0 km/h v mean of 800 km/h; P<0.001).

> CONCLUSIONS Parachute use did not reduce death or major traumatic injury when jumping from aircraft in the first randomized evaluation of this intervention. However, the trial was only able to enroll participants on small stationary aircraft on the ground, suggesting cautious extrapolation to high altitude jumps. When beliefs regarding the effectiveness of an intervention exist in the community, randomized trials might selectively enroll individuals with a lower perceived likelihood of benefit, thus diminishing the applicability of the results to clinical practice.


Over the years I've practiced medicine the idea of "evidence-based" care has had its ups and downs.

Part of the problem lies in the question of what is "evidence". From what I've seen, heard and read, the question is broader than pharmaceutical research though that's certainly an important element. The evidence literature spans making diagnoses, applying drug and non-drug therapies, deciding what is a successful outcome of treatment, etc.

What I see is that clinicians are quite willing to use relevant and effective evidence-based methods. One problem in applying evidence to real-world practice arises out of the aggregate bases of the evidence. Even if evidence-based approaches work in typical cases, many patients vary enough from the norms that following evidence-based recommendations doesn't work.

Furthermore it shouldn't be surprising when low-quality evidence gains corporate or governmental backing. When such promoted "standards" defy clinical observation it tarnishes the idea of evidence-based practice. An example is fairly recent recommendations that men over 70 shouldn't have routine PSA lab tests. Urologists will tell you about the hundreds of men they see with early stage prostate cancer detected by periodic PSA tests. Following the "evidence" would result in far more deaths and disability so it's ignored.

As the article points out, corruption of "evidence-based medicine" has destroyed much of the promise the idea once held. Insurers, pharma houses, and governments contribute to the abuse of evidence to serve their own agendas. No doubt some fraction of practitioners have added to the problems, and that needs to be halted. But IMO regulators are quick to point fingers at all clinicians for the failures of health care systems. Rather, most providers would prefer to avoid corrupt "evidence", whereas real evidence is welcome to the workers toiling in the trenches.


Thanks for sharing your perspective.

> An example is fairly recent recommendations that men over 70 shouldn't have routine PSA lab tests. Urologists will tell you about the hundreds of men they see with early stage prostate cancer detected by periodic PSA tests. Following the "evidence" would result in far more deaths and disability so it's ignored.

Isn't that how any system works - the rules provide guidance (with some teeth), and the people working hands-on apply their judgment? I recently was reading about the US Army's process. Many think of the military as highly regimented, but the US Army encourages - in fact, requires - subordinates to ignore orders that don't apply to their current situation; they are required to fulfill the 'commander's intent'. Only broken, authoritarian or totalitarian systems try to apply the theoretical centralized control to practice, where it is so rigid that it is bound to fail at scale (like another army in the news, apparently). What central authority is intelligent enough, has the information for, and the vision for anticipating every situation now and in the future, knowing the best solution, and writing rules for it?


Yeah, I spent a couple of years in the US Army, albeit a long time ago. There were many rules and directives; a lot of it was incredibly picky, with minuscule impact, and soon forgotten. There was an art to divining what was really important and what wasn't.

In medicine today it's sort of similar; a lot of rules are propagated by bureaucratic minds. Sometimes it's just annoying, other times a big headache. On rare occasions such rules actually make sense.

The problem with rules like the one I cited is the risk that they will be enforced. This can happen in institutional settings like large clinics, a health care system, or a government agency.

No set of rules or guidelines will be airtight and completely defensible. Yes, we're duty bound to act on the basis of reason and good judgement. However when rules are generated that are out of touch with reality there's a risk that regulatory entities, insurers, etc., will pick up that ball and run with it.

Impediments to practicing according to one's best medical judgement are ubiquitous. One is insurance company rules requiring preauthorization for medications, even many generics. I can't tell you how many hours my staff and I have spent knocking this around. Of course delayed treatment affects patient well-being. Ironically, 95% of the time the meds I intended to prescribe were eventually approved anyway. Having to play the game accomplished nothing of value for anyone.

You ask "What central authority...?" Of course, central authorities don't know the best solutions, often enough not even passable solutions to problems. Too many rules are as dangerous as not enough rules. I believe developing rules via "bottom-up" methodologies would produce rule sets with minimum necessary restriction and conceivably could even be productivity enhancing.

I'll be quiet very soon, but I'm reminded of a patient encounter. A man was in my office talking about conflicts at work re: a micromanaging supervisor. It was a group of PhD-level software engineers. I was thinking, do these people even need "supervision"? Shouldn't they be trusted? Trustworthy professionals can be relied on to do their jobs. Sure, keep an eye on work quality, allow the track records to speak for themselves.


The problem is our politics - clearly our health care system needs fixing. But nobody chooses to act on the politics and the obvious obstructions to progress.

> I'm reminded of a patient encounter. A man was in my office talking about conflicts at work re: a micromanaging supervisor. It was a group of PhD-level software engineers. I was thinking, do these people even need "supervision"? Shouldn't they be trusted? Trustworthy professionals can be relied on to do their jobs. Sure, keep an eye on work quality, allow the track records to speak for themselves.

Generally, yes, good management is knowing who to trust, and putting them in a position to succeed. But the original bias, of course, is toward ourselves. They/we are still human, we still put our pants on one leg at a time. Just because we are highly skilled in something, doesn't mean we have good judgment in other ways, we are motivated, not distracted, biased in other ways, etc. Plenty of very successful people do god-awful stupid things.


No question about it, no one is immune, "to err is human". Also true that assessing performance of tasks requiring complex judgement is itself error prone. Credentials alone are an insufficient baseline. So for all kinds of roles auditing mechanisms are necessary to monitor performance. And truly responsible professionals don't object to quality assurance measures, in fact they want to know if their performance declines in some respect.

In my observation it's problematic when rules become intrusive, overly restrictive, or needlessly punitive. At that point monitoring no longer serves its nominal purpose. There's a proper balance that's minimally adversarial, making it work requires mutual respect and constrained interaction between professionals and reviewers.

I think that's the source of a significant share of the problems in our health care systems. Way too much energy is wasted on battling phantoms. Not sure how the trends can be reversed. Best advice to patients is to make sure to strongly advocate for themselves. I know both sides as doctor and patient, I know how big a challenge it can be getting what we need despite layers of roadblocks and potholes on the path.


> The problem with rules like the one I cited is the risk that they will be enforced. This can happen in institutional settings like large clinics, a health care system, or a government agency.

Ha! I thought you were saying the exact opposite! That urologists, following their "evidence", will do too many tests and cause many people to suffer from false positives.

But you were actually calling the guidelines "evidence" and saying they were the problem?

Maybe! But in that case why are the guidelines so wrong?


This should not be considered evidence that other quack medicines have any superiority over modern medicine.


True. It is just evidence that sometimes modern medicine doesn't have any superiority over quack medicines either.


Science is a pretty long-term process. In practice, results and consensus stabilize after around 50 years or so.

The reason it takes so long is that the turnaround time for a single result in science is half a year to a year. Imagine discussing something with someone where every sentence you speak could only be uttered after a year.

Articles like the OP always hinge on some short- or medium-term understanding of science. They try to point out that the whole approach is wrong based on the intermediate results of the process. The OP itself is part of this process and is a reason why the results will be better and more stabilized a couple of decades from now.


> consensus stabilizes after around 50 years or so

One other reason for science being such a long-term process is Planck's Principle:

https://en.wikipedia.org/wiki/Planck%27s_principle

Basically, scientific innovations spread by the old orthodoxy retiring/dying and being replaced by successors who are familiar with the new ideas (and who then proceed to form the new orthodoxy).


But there is also the social problem. Incentives and power structures matter, sometimes to a comical degree. Openness and collaboration are fundamental to progress.


Massive, major flaw in their argument: "without seeing the raw data" is not true, at least in the UK.

All raw study data, published and unpublished, must be provided before a medicine will be reimbursed. It is reviewed by teams who do their utmost to prove the med is worthless.

Same for the FDA, afaik. All the data is generally "commercial in confidence", so any old twitterati cannot download and misinterpret it.

All trials must be registered on Clinicaltrials.gov or they cannot be used as evidence.

Publication bias is a well-known issue, resolved for more than a decade. No idea how this article made it into the BMJ; smells like clickbait.


"All the data is generally "commercial in confidence" so any old twitteraty can not download and misinterpret it."

This is totally the wrong attitude. Everyone should be able to look at the data. The established authorities are absolutely capable of totally misinterpreting data. The only hope is to let other people look at it.


In theory I agree with you.

In practice, I've seen too many bad-faith actors intentionally drawing false conclusions and then linking to data they say agrees with them that clearly doesn't. Twitter, Wikipedia, and Facebook are terrible for this behaviour; many of the so-called fact checkers are just as bad.

Healthcare trial data analysis is a highly specialised skill (the last one I did took about 3 months of meetings just to understand the dataset: about 6000 columns of abbreviations of complex medical terms). Those with the qualifications to analyse it are typically not refused access if they actually want it, and something like GPRD is generally more valuable and available to everyone.

In Sweden, everyone's entire medical history is open access for research.

There isn't any shortage of data; there is an acute shortage of analysts and of finance to cover the costs involved.


I have seen many studies where they say that qualified investigators can have access to the data if they sign lots of forms, agree to collaborate with (not just acknowledge) the original authors, and wait six months for this all to be approved. I expect that I would be approved if I did all this, but it's not worth it. In some cases I suspect the study is flawed, but debunking it months later would be a lot of work for little return. In other cases, the study looks OK, but without ready access to the data to confirm this (and perhaps get more information with a more sophisticated analysis), it's really pretty much worthless.

Yes, worthless. As in no one should put any weight on such unconfirmable "evidence". It's just wasted effort by the investigators, who piddled away their time (and their funding agency's money) because they're not willing to release the data.

These are not studies where one would need 3 months to just understand what the variable names mean - perhaps the data was extracted from such a huge database, and perhaps there are issues with how that was done, but there are also quite possibly issues with what they did afterwards, which can be examined separately.

Comments by unqualified people on twitter are a fact of life. You can't get rid of them without also eliminating comments that reveal mistakes (even fraud) in what superficially seems like authoritative research. Your judgement that such uninformed comments are intentional lies is probably untrue in most cases.


>Yes, worthless. As in no one should put any weight on such unconfirmable "evidence". It's just wasted effort by the investigators, who piddled away their time (and their funding agency's money) because they're not willing to release the data.

And UK regulators definitely do not. If they don't have sufficient data to be confident in the decision, they make collecting more a requirement of reimbursement.

Here is an example of the guidelines

https://www.nice.org.uk/process/pmg20/chapter/reviewing-rese...

Taking only a single trial is worthless - or rather, about as useful as taking a murder suspect's mother's testimony. Evidence based medicine is simply not broken by declining to hand complex, expensive, and commercially valuable data to everyone who would like a copy.

Any more than the court system is broken by keeping certain evidence from a jury.

>Your judgement that such uninformed comments are intentional lies is probably untrue in most cases.

Oh, they definitely are. Just look at the recent data on covid vaccines published by Public Health Wales to see how bad that situation currently is.


> Publication bias is a well-known issue, resolved for more than a decade.

How is that resolved? This is not what I see in my meta-analyses.


Requiring registration with clinicaltrials.gov, and regulators doing their own analysis of all the data.

Managed entry agreements and conditional reimbursement (don't pay for patients that don't do better than the average of current practice).

Requirements for "real world data" collection and submission post approval.

A whole host of other mechanisms I can't remember off the top of my head.

Also

"Evidence based medicine" doesn't mean the evidence is infallible (e.g. rare disease meds have a huge issue with collecting any meaningful data) - it just means actually using evidence, rather than just taking industry's and academia's word for it that it works, like in the old days.


From personal experience, I think you are massively overestimating the reach and effectiveness of regulations.


I've worked with NICE; several of my students still do. Based on the evidence, we've kicked quite a few meds out of the NHS that the industry and academics were wrong about - not on what the industry/academics said before they were shown to be incorrect.

The whole premise of this article is wrong. "Academics said so" is not evidence based medicine; it is one small facet of it. And the author matters: no one trusts "sponsored by" papers, and the NIHR is constantly publishing counter-evidence. Some of that even ends up in court (e.g. Avastin):

https://www.reuters.com/article/us-novartis-bayer-britain-id...


This I totally agree with. I've not much experience with the UK system, but I disagree that publication bias is solved, at least regarding the scientific literature at large.


Well. I guess you could probably pin some blame on publication bias for the massive misinvestment that went into the recent dementia meds that all failed at phase 3. Something made them get the science badly wrong.

I was talking specifically in the context of evidence based medicine for regulatory decision making, which is the process that replaced what came before it: blue envelopes full of money, or throwing academic papers at an important politician whose relative had the disease.


Do these issues not still apply? Have they been fixed?

https://en.m.wikipedia.org/wiki/AllTrials


They are "fixed" by extending the evidence base for decision making well beyond raw clinical trial data and publications on them (none of which ever translates into real world practice outcomes anyway).

In important instances (such as expensive meds going to large numbers of people) the payer will commission their own research e.g. through NIHR https://www.nihr.ac.uk/

There is a whole field of "value of information" now to estimate how incorrect information would affect a decision and whether it warrants research investment (which covers publication bias - e.g. how many unpublished studies showing no or negative effects would there need to be to change the decision)
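
To make "value of information" concrete, here is a toy sketch (hypothetical numbers, assuming numpy) of the expected value of perfect information: simulate the uncertain per-patient net benefit of a new treatment, then compare deciding now with deciding after the uncertainty is resolved:

  import numpy as np

  rng = np.random.default_rng(0)
  # Hypothetical: net benefit of the new treatment vs current practice, per patient.
  net_benefit = rng.normal(5.0, 20.0, 1_000_000)

  value_now = max(net_benefit.mean(), 0.0)             # adopt iff expected benefit > 0
  value_perfect = np.maximum(net_benefit, 0.0).mean()  # pick the winner in every scenario

  evpi = value_perfect - value_now
  print(f"EVPI per patient: {evpi:.2f}")  # upper bound on what further research is worth

In the same spirit, hypothetical unpublished negative studies can be modelled as shifting that distribution downward; if a plausible shift flips the adopt/reject decision, the missing information matters.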


One of the issues I’ve seen with EBP is that it’s often not necessarily science based. That is to say, prior plausibility derived from basic science isn’t fully incorporated, ergo undue weight is given to noise. You see this in almost any study of pseudoscientific modalities, where actual efficacy would require a drastic reformation of our fundamental, very well tested models of the natural world. Why should a particularly strong one-off result for my made-up pain treatment be afforded the same stature as that of a pharmacological agent with clear physiological pathways?


Evidence-based medicine does incorporate prior evidence for causal evaluation. Check out the Bradford Hill criteria; they're the basis of causal evaluation in epidemiology.


The problem is misaligned incentives.

You need to go after the "research trolls". Same as you would with patent trolls. Hit them where their money is.

Pass a law that anyone ultimately funding or profiting off research can be sued by the public for double damages times the cost of the research if:

- the paper won't replicate

- the paper's sponsors & costs are not disclosed

- the paper does not include instructions for making data requests

- the researcher does not release data within 30 days of a request

- the paper's publisher (whoever that could be, even faculty) fails to flag prominently, within 30 days, any paper that has lost in court

So if a university employs one researcher at X salary, and the fraction of time spent on research is Y, the university can be sued for Y * X * 2 by Joe Schmo. Maybe this would be small claims court. The same would apply to any research created by big ag, pharma, etc. If a researcher uploads the paper to his own personal website and forgets to flag it as flawed, he now pays the price too.

It would be necessary to cap disgorgement at 10x the initial research cost. The point is to let the crowd hunt down garbage research, not bankrupt anyone for honest mistakes. I would also default to awarding the defense 2x incurred legal costs if they win the case, to ensure lawsuits have some merit.

This would immunize students if they are not paid to do the research (but not PhDs that live off stipends), or random people doing research out of goodwill.

Professionals would be very careful not to publish garbage lest they be fired from their jobs, and companies & researchers would think twice before pushing propaganda. No more research mills of BS.
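
For what it's worth, the arithmetic of the proposal is simple enough to sketch (all figures hypothetical):

  # Hypothetical figures for the scheme proposed above.
  salary = 120_000          # X: researcher's annual salary
  time_on_research = 0.5    # Y: fraction of time spent on the research
  research_cost = salary * time_on_research    # imputed cost: $60,000

  damages_per_suit = 2 * research_cost         # double damages: $120,000
  disgorgement_cap = 10 * research_cost        # total cap: $600,000
  print(damages_per_suit, disgorgement_cap)    # so at most 5 successful suits pay out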


There's so much money in cancer research - when I was a pharmacology/genetics researcher I was told to try to tie everything back to cancer for research papers and grant proposals.

And the oncology lab next door had the best equipment. We always had to borrow their stuff.

Idk maybe this is a good argument against private charity. Not to say that cancer research isn't important but from what I could tell, cancer researchers had an embarrassment of riches.


I think profit motive and regulatory capture are real problems but I don't think they're the only problems and maybe not the biggest.

For example, the FDA is generally agreed to have dropped the ball by dragging its feet on approving covid tests. I don't think profit motive or regulatory capture explain that. What does?

More transparency (as the authors recommend) sounds great but giving more power to regulatory bodies is no panacea.


> For example, the FDA is generally agreed to have dropped the ball by dragging its feet on approving covid tests. I don't think profit motive or regulatory capture explain that. What does?

I'm no fan of over-regulation, but the FDA finds itself in an impossible situation: approve tests quickly to slow the spread of Covid and risk later finding that the tests don't work, or test the products rigorously while patients suffer.

It's a very difficult balance to strike, and I would argue they've done as good a job as one could hope.


Big Pharma derives its oligopoly from the barriers to entry that the FDA imposes. Presumably they take every opportunity to lobby for more rigorous extensive testing. Not because they want to pay more to run tests, but because they want their smaller competitors to pay more to run studies for drugs that might compete.

This is, of course, not an explanation for how they have variously captured the components of the FDA, but it does explain how such a capture is an explanation for foot-dragging.


It's not just corruption, it is an idea that seems very correct, but which is subtly wrong.

Take a substance like chlorine. It is put into municipal drinking water because it kills things - things that can kill us or harm us. It is credited with making municipal water a lot safer.

It has been shown to be safe and effective for most people.

The key is most people. Some people, at the edges of the bell curve, cannot tolerate chlorine. If they drink water with it, or bathe in water with it, it harms them. I have family members who have this sensitivity. Since moving to a place with well water, their health has improved immensely.

If you were to be guided only by studies that show the safety and effectiveness of the chlorine treatment, then you could overlook the fact that some people are outliers.

Evidence based medicine will guide you to things that will work for most people in a given situation. However, for a specific patient, you may need to depart from that evidence.

Limiting doctors to only evidence based approaches prevents them from treating specific patients.


The EPA guidelines limit chlorine content in potable water to 1 PPM. That is well below the level at which even the most sensitive person is affected. The problem is this guideline has no legal enforcement mechanism, so whether or not your municipal water district follows it is up to them. If they're not, they're not practicing evidence-based anything. They're ignoring evidence-based guidelines for some other reason.

Whether any US water district is really doing this seems debatable anyway. Maybe this is just a consequence of the poor state of search engines, but every source I can find claiming specific US water supplies contain unsafe levels of chlorine seems to come from companies trying to sell you expensive filtration systems you almost certainly do not need.


That some people are sensitive to chlorine should then be scientifically studied, to expand the knowledge on that topic. Counting on individual practitioners to cook up artisanal individualized treatment on their own is both unscientific and the antithesis of what you (and everyone) want.


Where did I say a practitioner should "cook up artisanal treatments"?

That's a strawman, for sure.

Doctors have a lot of education, and gain a lot of experience over time.

What I pointed out is that the studies are not enough. Where they leave off is where a doctor needs to be able to apply their education, experience, and knowledge of the case to make progress. Doctors are trained to apply scientific approaches to bridge the gap between the broad science and the individual patient. The big health co's do not like doctors doing this because it takes time and is expensive. So, they tie their hands because they only want the doctors to focus on the easiest fixes that solve the most problems.


I am a doctor. That's not a strawman at all.

>Doctors are trained to apply scientific approaches to bridge the gap between the broad science and the individual patient.

No. Really, I mean it. Doctors are trained in textbook approaches, not in the scientific method. The less creative, the better the doctor. That's precisely what evidence-based medicine is, and it's what allowed us to leave the Dark Ages of paternalistic medicine.

> The big health co's do not like doctors doing this because it takes time and is expensive. So, they tie their hands because they only want the doctors to focus on the easiest fixes that solve the most problems.

We tie our own hands. With guidelines. Yes, the system is tuned to a limited investment of time and money in each patient for distributive justice reasons, but guidelines are (mostly) there to discourage crackpot medicine.


Evidence is king.

But hypothetically...

if I'm an outlier, and a blinded test is done on only me, and it reveals that I am sensitive to chlorinated water,

...I would count that as evidence based.


> Take a substance like chlorine. It is put into municipal drinking water because it kills things - things that can kill us or harm us. It is credited with making municipal water a lot safer.

> It has been shown to be safe and effective for most people.

And yet it wasn't until I was well into my 30s that I learned that not cleaning your shower head often enough or thoroughly enough can block the effects of the chlorine and spray aerosolized bacteria everywhere.

It doesn't matter what kind of medicine you have when there are plenty of problems that would be prevented if not for the obscene asymmetry of information and poor cultural health habits. But as long as big tech continues to believe that steering people toward political propaganda is what's best for society, that will be the narrative of the day rather than real progress.


And I was today years old (not much younger than you were)


What do you think evidence based means? There is evidence that some people are sensitive to chlorine. In an evidence-based approach it would be considered a possible cause until there is evidence to rule it out. If other issues present similar symptoms and are more likely, then they should be tested and ruled out first.


Might be wrong, but I think they just mean that when you do large scale trials you aggregate over the population, and determine if the medication had a statistically significant effect. You can get evidence that a subgroup does not react well, but you might not, if that is not your focus.

A key concept and solution here is 'precision medicine', tailoring medicine based on more precise information about the individual.
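
A minimal simulation of that point (hypothetical effect sizes, assuming numpy): the aggregate result looks clearly beneficial even though a 5% subgroup is actively harmed:

  import numpy as np

  rng = np.random.default_rng(0)
  n = 100_000
  harmed = rng.random(n) < 0.05          # hypothetical sensitive subgroup (5%)
  effect = np.where(harmed,
                    rng.normal(-2.0, 1.0, n),   # subgroup: harmed on average
                    rng.normal(+1.0, 1.0, n))   # everyone else: helped on average

  print(f"overall mean effect:  {effect.mean():+.2f}")          # ~ +0.85
  print(f"subgroup mean effect: {effect[harmed].mean():+.2f}")  # ~ -2.00, invisible in the aggregate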


I think you’re getting down-voted because your comment indicates you think this article says that doctors should only be able to use evidence-based approaches. That’s not at all what this article is about.


Chloramine is replacing chlorine in municipal treatment.

I'd be amazed if somebody was really sensitive to chlorine, to the point where typical municipal water caused rashes. After all, all of us contain 80 grams of it and consume 500 mg/day.


It would appear that the authors are selling a book on this topic, which explains the BMJ article.

And I'm not sure their premise is all that accurate. If they think the FDA is in the pocket of pharmaceutical companies, that runs entirely counter to my experience. I can recall plenty of times where we'd suggest a change to a filing and the FDA response was "nope, changes denied". If you think the low level reviewers of submissions are secretly pulling the strings for Pfizer, you'd be wrong.

And in terms of the quality of publications - this is nothing new. There has always been varying robustness in clinical trials. That's why the NCCN guidelines for cancer treatment actually rate the clinical evidence from Category 1 (strongest) to Category 3 (no evidence whatsoever).


Another great argument I have heard against EBM (or maybe rather against being religious about it) is that the studies and papers held to highest regard are made in circumstances that do not apply to most of healthcare: the studies might be done in the best hospitals like Johns Hopkins, with practically limitless human resources for the study, and very carefully selected patient groups. In a normal hospital situation you won’t have the time or the resources to dedicate the same care for each patient, and the patients might have other conditions (which would rule you out of the big studies).

Still, as a layman who is not a doctor, evidence based medicine sounds like a good idea to me; but as with everything, taking it to an extreme is probably harmful.


I have no idea if any of this is true, but it rings true and it reminds me of similar distortions of the integrity of scientific inquiry by the interest of industry, in my own field of study: artificial intelligence and machine learning.

And perhaps that's no surprise. It's the same universities that produce both kinds of research (and researcher) after all.


This is how dark ages start. Actual knowledge is watered down by corporate nonsense. Society loses confidence in the scientific establishment. The organizing principle that took center stage in the Age of Enlightenment will be lost. Society still needs some way to organize its cognition. Power and superstition will take center stage again.


This is how conspiracy theories start. Take a thing that has some truth to it and extrapolate it all the way up, dismissing that the whole system is probably more complex.


In France we have this research director, Didier Raoult, who pushed for the use of chloroquine as a treatment against COVID. He was the head of a research center dedicated to tropical diseases. This guy literally argued that the smaller the test sample, the better the result. He lost his job as director (not to mention an investigation into research fraud), but he made it very difficult for uneducated people to trust the vaccine.

Not to mention that I dated a biology scientist and teacher who did not want to get vaccinated. I heard two other scientists lost their jobs in a research lab in my city because they did not want to get vaccinated.

So no matter what people tell me, it seems that science is rife with people who should not have their credentials, despite high-level education, universities, "merit-based" grades, etc.

It reinforces my cynicism, because I had some scientific education, but I dropped out and never earned a degree. It's what Pierre Bourdieu talks about: that degrees are a way of maintaining social classes. It's exactly what you hear in the movie "Good Will Hunting", that a degree is just a piece of paper, while people can just educate themselves by reading science books.


https://news.ycombinator.com/item?id=30761070#30796020

I posted that article here three days ago; isn't there any moderation for duplicate submissions?


I've been more or less of this opinion since watching this talk years ago by Dr. Jason Fung: https://www.youtube.com/watch?v=z6IO2DZjOkY


EBM suggests that swimming in a heated swimming pool (controlled studies) can be the same as swimming in the open sea/ocean (applying the result in any possible case).


The distinction you allude to is so well known and understood that there are not only papers about it, but also special words to distinguish them: efficacy (performance of an intervention under ideal and controlled circumstances) and effectiveness (performance under ‘real-world' conditions).

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3726789/

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3912314/


The problem is that there is no EBM without p-values and no real world without Bayesian reasoning, so the two are fundamentally disconnected.
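For readers unfamiliar with the distinction, here is a minimal sketch with entirely made-up trial numbers (scipy is assumed to be available). The p-value answers "how surprising is this data if there is no effect?", while the Bayesian posterior answers the question a clinician actually asks: "how likely is the treatment to work, given this data?"

    from scipy import stats

    # Hypothetical trial: 60 of 100 patients improve on a drug,
    # against a historical improvement rate of 50%.
    n, k, baseline = 100, 60, 0.5

    # Frequentist: P(data at least this extreme | no effect)
    p_value = stats.binomtest(k, n, baseline, alternative="greater").pvalue

    # Bayesian: posterior over the true rate from a flat Beta(1, 1) prior;
    # P(rate > baseline | data) is the clinically relevant quantity.
    posterior = stats.beta(1 + k, 1 + n - k)
    p_better = posterior.sf(baseline)

    print(f"p-value: {p_value:.3f}")                # ~0.028
    print(f"P(rate > 50% | data): {p_better:.3f}")  # ~0.98

The two numbers describe the same data but answer different questions, which is the disconnect the comment above is pointing at.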


Science marches on one funeral at a time, medicine marches on one expired patent at a time.


No. For medicine, both are true because funerals of both patients and professors are a signal for progress.


My cursory reading is: EBM would be great, but we can't trust the "E" in EBM because the people selling the "M" in EBM have a vested interest in producing fake data to justify selling more to the institutions hoping to do the "B".

It seems like the author thinks more "independent" parties would help (regulators? white-label labs to do the testing instead of pharma labs?).

All I can tell from the last two years is that I doubt people would really trust a "government agency" (or anyone "paid by the taxpayers") more than "the corporate" lab.

(Case in point: the EU agencies that banned or suspended or restricted usage of some COVID-19 vaccines, at various points in the last year, are still mistrusted on the grounds that they did not ban or suspend _one_ of the vaccines - which happens to be the most popular - hence "they must be in the pocket of lab X".)


Granted, there is the day-to-day human physician's "Bayesian" evidence-based medicine... for better or worse...


Here in Québec the vaccine passport and the curfew were touted as the pinnacle of "following the science"(tm). Anyone contradicting their usefulness as effective public health measures was accused of "spreading misinformation"(tm).

The curfew was implemented twice. The vaccine passport was only recently removed.

Turns out, one FOIA request later, that they never consulted the data scientists of the public health institute. They made it all up.


There is one simple solution: replication. Stop complaining and start replicating.


We all live inside our own illusions.


>Scientific progress is thwarted by the ownership of data and knowledge because industry suppresses negative trial results, fails to report adverse events, and does not share raw data with the academic research community.

Not to derail the thread but this nicely encapsulates my reservations toward the COVID vaccines, and why I think mandates are egregiously unethical. The raw clinical trial data is still effectively a secret between pharma and the FDA, and both are under immense financial/political pressure to release a "safe and effective" product, not to mention the incestuous relationship between the ostensibly independent FDA and big pharma.


The original clinical trials are essentially irrelevant for the vaccines now; we have a lot more data from using the vaccines in the real world. Of course that data is not randomized and double-blind, but because of the sheer numbers you can still do very good science on this. The original trials are also simply outdated because the virus has changed; we're several big variants later now.

There are also more agencies than just the FDA that approved these vaccines, if you don't trust them specifically.

Publication bias is a real issue, and I'm annoyed that all efforts to reduce it seem rather ineffective. The idea behind requiring registration of trials is good, but it doesn't do much if you don't actually police the rules. But the COVID vaccines got such enormous amounts of attention that it really doesn't matter in this specific case.


> Of course that data is not randomized and double-blind, but because of the sheer numbers you can still do very good science on this

But is the science being done "good" considering that the political and social pressures around reporting negative results/adverse effects still exist? How do you get good data on adverse events if vaccine recipients don't even think to correlate strange symptoms x weeks or months post-vaccination with the vaccine? How would you detect an autoimmune disorder with vague and nonspecific symptoms, for example, when doctors are also unwilling or unable to collect relevant data out of a combination of bias and ignorance?

It's a double whammy: because of the novelty we don't know exactly what we're looking for, and because of stigma/political pressure we are arguably not looking hard enough. Being labeled an anti-vaxxer is effectively career and social suicide; how many researchers are willing to stick their necks out, especially when rigorous proof would take years of research and funding to aggregate?


There was a very rare side effect of the AstraZeneca vaccine that was discovered quite quickly. Not in the original trials, as they are simply not powerful enough to detect side effects that rare, but in later observations. We also figured out pretty quickly that the risk is age-related, and many regulating authorities reacted based on that and adjusted their recommendations.

To me this is pretty good evidence that the existing mechanisms for detecting rare side effects work in general. One can argue about the details of how exactly the authorities responded in this case, and I think reasonably disagree on how to weigh the risk/benefit for specific age groups. Those are much more difficult questions based on inherently fuzzy data, given the low frequency of this serious side effect. But the detection itself worked well, and it also produced very quick reactions.


There are lots of papers from researchers picking a possible side effect and then looking at population data to see whether there was any change in the baseline rate corresponding to it. That seems a pretty good way to get a publication out there, since it is both novel and relevant.
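A minimal sketch of that observed-vs-expected approach, with entirely hypothetical numbers (scipy assumed): compare the events seen in a vaccinated cohort against the count the pre-vaccine background rate would predict.

    from scipy import stats

    person_years = 2_000_000   # hypothetical vaccinated follow-up time
    baseline_rate = 1.5e-6     # hypothetical background events per person-year
    observed = 9               # hypothetical events seen post-vaccination

    expected = baseline_rate * person_years  # = 3.0
    # One-sided Poisson test: how surprising are >= 9 events
    # if the background rate is unchanged?
    p = stats.poisson.sf(observed - 1, expected)
    print(f"expected {expected:.1f}, observed {observed}, p ~ {p:.4f}")  # ~0.004

A small p here would suggest the post-vaccination rate exceeds the background rate, which is roughly how very rare signals like the AZ clotting events get surfaced from population data.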


What? There was lots of work looking at side effects of the vaccines. The AZ vaccine was paused in several places while rare and unexplained blood clots were better understood. It was only because of the huge number of recipients that such information was available. What point are you trying to make? That somehow scientists are not willing to raise problems with the vaccines? That's demonstrably not true.

What is also demonstrably true is that the death rate from COVID is now negligible for the vaccinated. Might there be some super-rare interaction we don't know about? Sure, but there are just as likely to be super-rare interactions with COVID itself, or with whatever crank treatment of choice replaces a vaccine.


To a point that's understandable; based on trials alone it's reasonable to be skeptical of the results. However, once the vaccines have been in mass distribution and there is copious evidence of their efficacy and the extreme rarity of adverse effects, I don't think it's reasonable anymore.

The mandates only came in once enough of the population had been inoculated to make a requirement possible. At that point the initial trials are a footnote compared to the results from mass vaccinations. IMHO it's just not reasonable for that objection to carry any weight. There may be other reasons, of course.


>and the extreme rarity of adverse effects, I don't think it's reasonable anymore

There's a major caveat here: sudden and obvious adverse effects are rare. The subtle and possibly long-term ones are still unknown. It took some 5-10 years before thalidomide was found to cause severe birth defects, for example.


It took four years until thalidomide was removed from the market. And it is the textbook case that triggered more stringent regulation of drugs.


> The subtle and possibly long term ones are still unknown

So just say you won't take it and be done, because there is no mechanism that can satisfy you.


Thalidomide isn't a vaccine, so that hardly seems relevant. We have extensive experience with many vaccines across huge populations and no reason to expect these ones to be any more dangerous than the others. Also, it seems somewhat unlikely that any possible side effects, were they to manifest, could be as bad as millions of deaths and tens of millions with long-term complications.

The vast majority of people objecting to COVID vaccines have no history of vaccine denial and no good reason to suspect the COVID vaccines are particularly dangerous compared to the many vaccines they previously accepted without a thought or any interest in their history, testing, or method of action. It's politically motivated from start to finish and has nothing to do with the medical facts.


The problem with this line of reasoning is that there is a huge difference between controlled trials (which are obviously imperfect themselves) and post-hoc analyses of uncontrolled real-world outcomes. There's a big leap of faith in trusting our public health authorities' ability to accurately measure and faithfully represent the real situation.

Just to name one small issue: our only measurements of vaccine efficacy are case counts, COVID hospitalizations, and COVID deaths. Each of these measurements is confounded by population-level differences in testing rate, testing policies (i.e., routine testing on admission to hospital), PCR cycle count, and many other factors. There is no longer a monitored control group, so we can't ever account for any group-level differences or confounds. I've seen no attempt to address these issues.

We also have no access to reliable data about confirmed adverse effects. A year later, it is still very hard for a person to quantitatively assess his/her own age-stratified risk/benefit tradeoff, even with the confounded efficacy measures.

So, given all that, why isn't skepticism reasonable anymore?


I really don’t see why there is a huge leap of faith with coronavirus vaccines, but not with the many safe, effective vaccines and medications we all benefit from throughout our lives.

Have you always felt this way about the medical profession? Have you any specific reasons to doubt the medical profession and its institutions, across many nations and accreditation agencies, now in particular?


>but not with the many safe, effective vaccines and medications we all benefit from throughout our lives.

Because mRNA is a novel technology, and the fact that people obliviously make this argument is a testament to the effectiveness of the "safe and effective" propaganda.

>Have you always felt this way about the medical profession? Have you any specific reasons to doubt the medical profession and its institutions, across many nations and accreditation agencies, now in particular?

There were numerous reasons to be skeptical before COVID: regulatory capture and the replication crisis in particular. Suddenly bringing these up gets you branded an anti-vaxxer. We shouldn't blindly trust our modern institutions; they have strayed increasingly far from clean science over the years. Now you have influences from industry and unrelated politics, in addition to the pressures that come with sticking close enough to orthodoxy to maintain a career and receive grant funding.

When I was younger I also had much more trust in our institutions, but with age and experience I have grown to recognize how imperfect they are, and none of those imperfections disappeared when the president decided he wanted a new vaccine yesterday; in fact many of those problems were enormously amplified.


Novel medical technologies are developed all the time. Every year new life saving treatments and drugs come out that transform the lives of people all over the world. Medical technology is still very much on the vertical part of the technological development S-curve, and development and testing methodologies have been improved and refined over many decades of experience, in many countries around the world working together to develop best practices and cross-check each other's work.

Why are you particularly concerned about these ones? Are there any others of the many, many new medical technologies coming out for which you have the same concern, or is it all of them?


>Why are you particularly concerned about these ones?

I'm particularly concerned about this one because the critical safety evaluation process was accelerated by a factor of 5-10 and the clinical safety data is not available for third-party review, yet it was used to justify propaganda that has influenced (biased) all subsequent research. Just like in software, the marginal gains of throwing money at a problem eventually approach zero; some things just take time, especially when you're evaluating biological side effects which may develop slowly.

I'm particularly concerned about this one because it hijacks your cellular machinery to manufacture a sudden megadose of an inflammatory protein (not a complete virus), which results in short-term (at least) autoimmune behavior.

I'm particularly concerned about this one because it was effectively mandated for hundreds of millions of people with an obviously incomplete cost/benefit analysis and anyone who asks if maybe the benefit is overestimated and the cost is underestimated is immediately branded an anti-vaxxer/right wing conspiracy nut.

>Novel medical technologies are developed all the time. Every year new life saving treatments and drugs come out that transform the lives of people all over the world.

And how many of the candidates that make it to clinical trials never make it to market, or worse, are withdrawn after measurable harm? Now throw in the accelerated (rushed) safety analysis and the systemic pro-COVID-vaccine bias, and it would be foolish not to be at least a little skeptical.

>and development and testing methodologies have been improved and refined over many decades of experience, in many countries around the world working together to develop best practices and cross-check each other's work.

And those testing methodologies still take time because of the nature of biology; you can't just snap your fingers and make adverse events happen more quickly. All those best practices and cross-checking go out the window once a rigid sociopolitical orthodoxy solidifies around certain subjects, and suddenly few researchers are willing to risk their careers for simply asking the wrong questions.

Nothing about the mRNA vaccine development process has been "normal", and these are emphatically not typical vaccines; the technology is unprecedented and its effects on the body are complex and difficult to study.


> the clinical safety data is not available for third party review

That's best practice, due to the problem that access to data from past trials can bias the planning and design of new trials. It's SOP, from hard-won practical experience over many decades.


The effect of vaccination is huge here, especially if we focus on deaths and hospitalisations in elderly people. There are of course issues with using data that is only observational and not part of a strict randomized and blinded trial. But the difference between unvaccinated and fully vaccinated observed in many different studies is simply so enormous that it doesn't really leave any room for doubt.

The effect of the vaccines is so huge that you only have to look at a Kaplan-Meier plot to immediately see the difference. And we don't have just a single study here; we have a lot of different ones that all indicate the vaccines are very effective and safe.
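For readers who haven't met one: a Kaplan-Meier plot shows the fraction of each group still event-free over time, correctly handling people whose follow-up ends early (censoring). A minimal sketch using the lifelines library with synthetic data; the group sizes and event rates below are invented purely for illustration.

    import numpy as np
    import matplotlib.pyplot as plt
    from lifelines import KaplanMeierFitter

    rng = np.random.default_rng(0)
    # Synthetic follow-up times (days) and death indicators, with the
    # unvaccinated group given a much higher event rate by construction.
    groups = {"vaccinated": 2000.0, "unvaccinated": 400.0}  # exponential scales
    ax = plt.subplot()
    for label, scale in groups.items():
        t = rng.exponential(scale, size=500).clip(max=180)  # censor at 180 days
        died = t < 180                                      # True = event observed
        KaplanMeierFitter().fit(t, event_observed=died, label=label) \
            .plot_survival_function(ax=ax)
    ax.set_xlabel("days of follow-up")
    ax.set_ylabel("survival probability")
    plt.show()

With rates this far apart, the two survival curves separate visibly within days, which is the "you only have to look at the plot" point made above.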


I would trust sound data and transparent, age-stratified risk analyses if they existed. With all respect, what you just wrote is essentially hand-waving that talks past what I wrote. High effectiveness for elderly people (which I find plausible, but which is still deeply confounded by testing rate differences) in no way justifies mandates for people in their early 20s.

In some proportion of hospitals, anyone unvaccinated who goes to hospital for any reason is tested for COVID-19; if positive, this is counted as a COVID-19 hospitalization. We have no access to the precise rate of these incidental hospitalizations. In some of these hospitals, the policy is that vaccinated people are NOT routinely tested unless they have symptoms. If this is the policy at a substantial number of hospitals, it could dramatically change the "effect size" of the measurements we are talking about. The same issue essentially applies to COVID-19 deaths and cases.

Large effect size alone generally isn't convincing when you're using such fundamentally confounded sampling procedures, merged age groups with wildly different risk profiles, and data aggregated across long time windows with different population sizes.
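A toy simulation of the testing-policy confound described above; every number is invented, and the point is only that asymmetric testing by itself can manufacture an apparent effect where none exists.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000
    infection_rate = 0.02        # identical in both groups by construction
    test_rate = {"vax": 0.3,     # vaccinated: tested only if symptomatic
                 "unvax": 0.9}   # unvaccinated: routinely tested on admission

    detected = {}
    for group, tr in test_rate.items():
        infected = rng.random(n) < infection_rate
        tested = rng.random(n) < tr
        detected[group] = int((infected & tested).sum())

    print("detected cases:", detected)
    apparent_ve = 1 - detected["vax"] / detected["unvax"]
    print(f"apparent 'effectiveness' with zero true effect: {apparent_ve:.0%}")
    # ~67% in expectation (1 - 0.3/0.9), despite identical infection rates

Real analyses are of course more careful than this, but the sketch shows why differential testing policy is a confounder worth ruling out before trusting a raw effect size.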


These confounders are everywhere. In the same way that only the dumb criminals get caught, we're probably missing many more confounding factors because our statistical analyses have limitations. Now add the stigma around questioning the vaccines, plus the political/financial pressures, and you have a recipe for poor-quality science.

It's not a conspiracy; people just tend to look away from things that could endanger their livelihoods. An emergent property of socioeconomic systems. A failure mode, so to speak.


Even if this particular vaccine is OK (and I still feel fine after the booster), the process seems shady if results are kept secret. Having good processes is very important; remember "checks and balances".


Trial data is kept confidential due to the risk that access to the data or information about it by research teams could influence or bias the design or conduct of further trials. This is routine in medical trials based on hard won lessons from previous problems due to biased trials and conflicts of interest.

It might seem shady if you don’t know the reasons for it (what are they hiding?). I understand that, but the fact is this is standard practice and releasing the raw data would be reckless and irresponsible.


I find it very discomfiting that anyone suspicious of the COVID-19 vaccine (by that I mean the COVID vaccine/spike protein in particular) is branded as anti-vax. I say this as a vaccinated person who caught a heavy bout of COVID. I suppose I'm grateful for the vaccine, but mandates scare me.


As Christians say, "turn the other cheek". (The labeling did not do much harm; don't get too invested in that.)

The reality is that we can't have policy/public sentiment that is nuanced. (I heard it breaks the space-time continuum.)

Find the silver lining and move on. I'm just glad that a lot of people are on the pro-vaccine side overall. While in your case it is a bit of mob mentality (mostly appeal to authority + optimism + judging the disagreeing side), in the long run "pro vaccines" is, even if only on average, leaps and bounds and a couple of parsecs better than "no vaccines".


Do you have school-aged children? Pets?



