
I worked on one in the 90's that was $50,000 a day for three years. The drug company, assuming the trial was successful, had about 4 years to make all its money back before the patent expired. Testing blood, urine, and stool for multiple things is expensive.

Intellectually, I can understand the need for control groups, but I still think it's immoral. When you stare at a spreadsheet and see that 70% of the control group is dead because some random number generator sorted them there like hell's own sorting hat, while 98% of the people getting the drug are alive, you have no business talking about statistics. That graph will haunt me till my dying day.



You didn't kill 70% of the control group, you saved 98% of the experimental group.

Also, with the appropriate trial design, you can stop the control and transfer patients once you see these big differences. Same thing should happen if you see the opposite, e.g. killing 70% of your experimental group.


Everyone knew the results of "standard treatment", that's why they were researching the drug. There were years and years of statistics.

We killed 70% of the control group. Doomed by a random number generator.


> Everyone knew

If that was truly the case, then - as the parent hinted - it was the trial design that was the problem, and not the practice of having control groups.

It is pretty common to stop trials early when it is obvious that the treatment works. Conversely, it is easy to think that "yea of course the new treatment works" despite the evidence not being there. The need for robust analysis must also be respected.


It was my understanding that every study has a control group getting the standard care. I didn't see a study without such a control group.


Yes, exactly. Control groups are incredibly important for ensuring good quality of clinical studies. It's a technique that solves multiple "calibration" problems, including:

- being able to draw causal conclusions

- being able to adjust against placebo effects

A lot of clever people have tried to come up with ways of doing away with control groups. But ultimately, the best we can achieve is to stop early, as soon as the trial has a clear outcome. Early stopping does seem to have become more common in recent years, though, so perhaps the study you were involved in was at a time when it wasn't really "the done thing".

But it still beats "studies" done 100 years ago, when you might give someone a cough mixture, see that they improved (if they died, let's just ignore that), and conclude that it was the cough mixture that did it!


> the best we can achieve is to stop early, as soon as the trial has a clear outcome

That must be really hard: if you wait for 95% confidence, you are selecting for 5% noise. If you repeatedly re-measure for 95% confidence, you strongly select for random noise.

Not many medical advancements provide such an anomalously strong signal (98% survival versus 30% survival).


> If you repeatedly re-measure for 95% confidence, you strongly select for random noise.

You can't use standard methods for early stopping - as you rightly point out, you get gibberish if you naively keep peeking at a growing data set. Instead, you have to use statistical methods that explicitly adjust for the repeated sampling in early stopping trials. This does make early stopping more complicated to analyse than a trial with a pre-determined duration.
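The inflation from naive peeking is easy to demonstrate with a small simulation. The sketch below (my own illustration, not any specific trial design; the 50% null survival rate and the four-look schedule are arbitrary choices) runs many null trials where drug and placebo are identical, and compares the false-positive rate of a single final test against testing at every interim look with the same unadjusted p < .05 threshold:

```python
import math
import random

def z_stat(successes_a, successes_b, n):
    # Two-sample z statistic for a difference in proportions (pooled SE).
    pa, pb = successes_a / n, successes_b / n
    p = (successes_a + successes_b) / (2 * n)
    se = math.sqrt(2 * p * (1 - p) / n)
    return abs(pa - pb) / se if se > 0 else 0.0

random.seed(0)
looks = [50, 100, 150, 200]   # interim sample sizes per arm
sims = 2000
false_pos_final = false_pos_peeking = 0

for _ in range(sims):
    # Null hypothesis is true: both arms have identical 50% survival.
    a = [random.random() < 0.5 for _ in range(max(looks))]
    b = [random.random() < 0.5 for _ in range(max(looks))]
    # One pre-planned test at the end (nominal alpha = 0.05):
    if z_stat(sum(a), sum(b), max(looks)) > 1.96:
        false_pos_final += 1
    # Peek at every look, declaring success at the first "significant" one:
    if any(z_stat(sum(a[:n]), sum(b[:n]), n) > 1.96 for n in looks):
        false_pos_peeking += 1

print(f"single final test : {false_pos_final / sims:.3f}")   # ~0.05
print(f"peeking 4 times   : {false_pos_peeking / sims:.3f}")  # well above 0.05
```

Group-sequential methods (e.g. O'Brien-Fleming or Pocock boundaries) work by tightening the per-look threshold so the overall false-positive rate stays at the nominal level.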


No, in fact it is quite common to do "open label" clinical studies in which patients know what drug they are getting and no one is given the current standard of care. This is especially common in cancer, where the standard of care is so poor, and in rare diseases, where the patient population isn't large enough to admit such a comprehensive study.

It is harder to have the same statistical confidence of efficacy and safety in such studies, but clinical researchers try to address those issues by varying e.g. dosage quantities, time in between doses, etc.

Source: place I work is currently doing studies of this nature, and in general such studies seem to be well-understood and accepted by the FDA.


Randomization is independent from blinding. Open label studies can be either randomized or not, and controlled or not.

The FDA will usually push for blinding if possible (sometimes it is not). They will also usually push for a randomized control with standard of care. The way the FDA views it (and I agree) is that control patients are not harmed, because they are still getting the same care they would outside of the trial.

It is usually unethical to have a treatment-free arm. However, this has its own problem: if you keep using equivalence comparisons, the end of the chain might not actually be any better.


For such a large effect it is quite possible to implement an adaptive trial design with unbalanced arms and interim analyses for efficacy. But you have to ask for this; the FDA will not necessarily suggest it directly.


Not trying to invalidate your experience, but I think it is interesting and disagree.

I suspect there are some people with different philosophies that sit better with this sort of thing.

It seems like you feel guilt not only for the harm you cause, but the harm you fail to prevent. Do you apply this logic to other parts of your life?

How do you feel about the trolley problem, where you have to kill some to save others?

How do you feel about the moral imperative of doctors to do no harm versus a utilitarian approach of maximizing lives saved?


I don't feel guilt, I still feel rage. I didn't determine any of the parameters or rules of the trial so I have no guilt on me. I just observed that the control group is dead because of some belief that they provided value when we already knew exactly what was going to happen to them.

If you can prevent harm to others, then not doing so is just being an a$$. The trolley problem is just counting souls and really doesn't happen much in real life. The true problem is thinking the only choices are us/them versus everyone. I'm not a doctor so I have no idea how their ethics applies to their professional decisions.


This does not happen in a vacuum - it is not about this group at all. It is all about the future, and whether this will put a stop to a lot more people dying in the future. If it is only going to save 20 people a year, it hardly matters, but if 1,000,000 people will be saved every year, that's a different story.

I don't know how big the groups were, but if you have 500 people in the control group and 150 die, and you save 1,000,000 per year, that is a great trade-off.

Plus, when you go into this experiment, you know it is an experiment and know in advance it is a 50/50 proposition whether you get the drug or the placebo, but at least you have a 50% chance. Better than no chance. And personally (though I normally hate personal anecdotes), I wouldn't mind risking my life in a test if I knew I had a 50% chance of getting the drug, and if not, I'd be saving 1,000,000 people a year (or whatever it is).

It's like being in the army. If you are the commander, the general, you might have to send 5,000 men charging up a hill that you know is well-defended and 80% will die, but you are using it as a diversionary tactic and take 80,000 men towards your real objective and you win it, then too bad for the 5,000 men. It's just the way things are. And if someone doesn't have the stomach to be the general, then they shouldn't be the general and look for another line of work.

That's how I see it.


Control groups exist because they get us closer to global optimality.

We can't eliminate suffering and dying of untreated people but it's generally considered ethical to eliminate suffering and dying of many while withholding that treatment from a subgroup of the study ("allowing them to die" by not taking action, rather than the "causing them to die" by taking action).

As long as that selection is done randomly and "fairly", I think it's an entirely acceptable risk. There have been trials that were truly badly run, and people died and suffered more than necessary, due to mistakes (often amateur ones) in the protocol. I'm more concerned about those types of deaths.


Thanks for clarifying your position


If that was the case, the study actually was unethical.

Some designs do include interim analyses and stopping rules.

However, in my view the real scandal is that we are still using NHST for clinical trials. We should be continuously updating a sensible prior for the effect size and approving the drug/stopping the trial when we have sufficient evidence one way or the other.

As it is, the results of many underpowered trials are essentially thrown away because p > .05, which is stupid. This says nothing about the balance of evidence for the efficacy of an intervention… only the inertia and innumeracy of many clinicians preserves the tradition.
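To make the idea concrete, here is a minimal sketch of what continuous Bayesian monitoring could look like. All the specifics are my own illustrative assumptions: Beta(1, 1) priors on each arm's survival rate, a 0.99 posterior-probability stopping threshold, and true survival rates of 98% vs 30% echoing the numbers upthread.

```python
import random

def prob_treatment_better(a_t, b_t, a_c, b_c, draws=5000):
    # Monte Carlo estimate of P(p_treatment > p_control) under Beta posteriors.
    wins = sum(random.betavariate(a_t, b_t) > random.betavariate(a_c, b_c)
               for _ in range(draws))
    return wins / draws

random.seed(1)
a_t, b_t = 1, 1   # Beta(1, 1) prior for the treatment arm (survived, died)
a_c, b_c = 1, 1   # Beta(1, 1) prior for the control arm

for patient in range(1, 501):
    # Enroll one patient per arm and update each posterior with the outcome.
    a_t, b_t = (a_t + 1, b_t) if random.random() < 0.98 else (a_t, b_t + 1)
    a_c, b_c = (a_c + 1, b_c) if random.random() < 0.30 else (a_c, b_c + 1)
    p = prob_treatment_better(a_t, b_t, a_c, b_c)
    # Stop as soon as the evidence clears the (assumed) 0.99 threshold.
    if patient >= 10 and p > 0.99:
        print(f"stop after {patient} patients per arm, "
              f"P(treatment better) ≈ {p:.3f}")
        break
```

With an effect this large, the posterior separates after only a handful of patients per arm; with a marginal effect, the same loop would simply keep enrolling until the evidence accumulated one way or the other.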


Yeah if it's a potentially life saving treatment you really need that design that allows peeking at the results.


Peek-and-do-more is compatible with double-blindness, as long as the peeker-and-decider is different from the dispenser and patient.


It's not unusual for trials to be stopped earlier for that exact reason. However, you can't just glance at a spreadsheet and say "it's working!".

Statistical analyses need to be pre-defined and powered to measure an effect.

I've been on clinical trials where the innovator drug showed a fantastic effect early on, then two overall survival curves crossed a year later.


It's possible to stop a trial because the new drug is much more effective than the old drug. It's also possible to stop a trial because the new drug's significantly worse than the old. We've done both. Also, I hope that no one ever tells you that you have to do a drug schema change because "too many kids are dying."



