In studies such as this, with huge intersubject variability and probably also quite large day-to-day intrasubject variability on many of the scales, it makes sense to look at change from baseline for individuals in the various groups rather than just differences in raw scores between treatment groups.
From a cursory glance, it seems most of the analyses done were of the latter kind, and I think a study such as this, with low N, is then expected not to show much of an effect of any kind (since the intersubject variability, and potentially also the natural intrasubject variability, on most measured scales seem higher than any expected treatment effect).
I am almost certain that, e.g., none of the approved and quite convincingly effective SSRIs would have shown any efficacy in a study with this design and a similar N.
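As a rough illustration (all numbers below are invented for the sake of the sketch, not taken from the study), here is a quick simulation of why the choice of analysis matters at this kind of N:

```python
# Toy simulation: high intersubject variability, small true effect.
# Comparing raw post-treatment scores between small groups has little
# power; comparing change from baseline recovers most of it.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 30                  # subjects per arm, roughly this study's scale
sigma_between = 10.0    # assumed spread of stable individual baselines
sigma_within = 2.0      # assumed day-to-day measurement noise
effect = 2.0            # assumed modest true treatment effect

raw_hits, change_hits, trials = 0, 0, 2000
for _ in range(trials):
    base_t = rng.normal(50, sigma_between, n)   # treatment-arm baselines
    base_c = rng.normal(50, sigma_between, n)   # control-arm baselines
    pre_t = base_t + rng.normal(0, sigma_within, n)
    pre_c = base_c + rng.normal(0, sigma_within, n)
    post_t = base_t + effect + rng.normal(0, sigma_within, n)
    post_c = base_c + rng.normal(0, sigma_within, n)
    # raw-score comparison between groups
    raw_hits += stats.ttest_ind(post_t, post_c).pvalue < 0.05
    # change-from-baseline comparison between groups
    change_hits += stats.ttest_ind(post_t - pre_t, post_c - pre_c).pvalue < 0.05

print(f"power, raw scores:           {raw_hits / trials:.2f}")    # roughly 0.1
print(f"power, change from baseline: {change_hits / trials:.2f}") # roughly 0.8
```

With these made-up parameters, the same true effect goes from essentially undetectable to reliably detectable just by switching the variable of comparison.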
I don't see how this is relevant to the usefulness of microdosing.
- Assume that a microdose either improves an outcome or does nothing. Then, averaged over a large group, such random improvements would lift the group's outcome a little. It's a microdose worth taking.
- Assume that a microdose can both improve and worsen the outcome, or have no effect. If the experimental group's averaged results are indistinguishable from the control group's averaged results, it means that the microdose worsens the outcome about as often and/or as much as it improves it. This, to me, means that a microdose is a gamble not worth taking (a toy simulation below makes this concrete).
There is, of course, a difference between a microdose having no effect at all and having an effect that can be either positive or negative. That difference matters for further research. For usage here and now, it's sadly irrelevant.
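To make the two scenarios concrete, a toy sketch (the ±1-point effect sizes are invented, purely for illustration):

```python
# Scenario 2's group average matches placebo even though half the
# takers end up worse off.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Scenario 1: improves by 1 point half the time, otherwise does nothing.
effect_1 = rng.choice([0.0, 1.0], size=n)
# Scenario 2: improves or worsens by 1 point with equal probability.
effect_2 = rng.choice([-1.0, 1.0], size=n)

print(effect_1.mean())        # ~0.5: the group average shifts up
print(effect_2.mean())        # ~0.0: indistinguishable from placebo
print((effect_2 < 0).mean())  # ~0.5: half the individuals are worse off
```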
If a microdose only works when the person taking it knows it works, then it's basically the placebo effect. A good placebo can be useful, at least for commercial purposes.
> This, to me, means that a microdose is a gamble not worth taking.
This has always been the problem with psychedelics as a therapeutic approach. It's hard to reconcile being a responsible clinician and recommending a therapy with such mixed and often-negative results. They are in charge of a person's mental well-being in a way that people evangelizing psychedelic therapy don't seem to properly appreciate. If a patient is interested in that therapy, they might be a better candidate, but even then... if the results aren't great it's hard to justify.
Don't get me wrong, I have no problem with psychedelics for personal use, and I've seen them do wonderful things for people. But I've also seen them do horrific things to people, and I feel like a lot of young people have a great trip and then immediately conclude that literally every human being NEEDS to go out and trip without any more consideration. That same sort of evangelism carries over to the microdosing realm. It may have a place! But personally, I've tried it with psilocybin, and I found that typical microdoses have a very detrimental effect on my ability to focus.
Keep in mind this study is evaluating the claims that microdosing leads to specific outcomes, such as enhanced wellness and cognitive enhancement, which people seek out while microdosing. That's different from using strong doses of psychedelics to, say, break entrenched thought patterns (addiction, depression, OCD).
In the former case, a positive outcome is expected and therefore negative outcomes are in a sense less tolerable, especially since a false positive would lead to people repeatedly microdosing over an extended period of time, to their long-term detriment.
In the latter case, a "negative experience" does not preclude getting the desired results. And an acute negative experience in a one-time dose may be tolerable when contrasted with the long-term severity of the pathology it is meant to treat.
As far as I know, experiments with CBT + psychedelics have shown very promising results for treating PTSD and certain kinds of depression, especially in people who are about to die and need to come to terms with that.
These are not microdosing experiments, though. The usual protocol is a few weeks of CBT, then a half dose, then a full dose the following week, followed by more weeks of CBT. No placebo control is possible here because, well, it is impossible for someone to believe they have taken a full dose of a psychedelic drug and not feel anything. But you still have a control in this case, because you have a group that goes through CBT alone and another group that goes through CBT + drugs.
In fact, these studies have reported very few "bad trips", mostly, I guess, because you have the half-dose session before the full-dose session, in a very controlled environment, and so on. These experiments are obviously done quite responsibly.
So I think you're mixing things up a little bit, and I would suggest doing a little more research on the topic. Psychedelic therapy, at least the kind that is being explored seriously, is not really done through microdosing.
Just a small note, but my understanding is that “CBT”, aka cognitive behavioral therapy, is a specific form of therapy, while most research around psychedelic-assisted therapy centers on traditional talk therapy rather than a more specific technique.
> No placebo control is possible here because, well, it is impossible for someone to believe they have taken a full dose of a psychedelic drug and not feel anything.
From what I recall, a low dose of methylphenidate is used as a control in many studies because it has some of the same side effects without the trip. For psychedelic-naive people who don’t really know what to expect, I could see it being an ok control. Once you’ve experienced a single high-dose trip though… yeah… you’re never going to trick someone with it.
I wonder how much of this is true for therapy in general. When I've looked into it, the evidence for cognitive behavioral therapy (CBT) is a lot weaker than I would have expected, and there seems to be some question about how much of its effect is due to the placebo effect. This study on microdosing makes the point that if an experiment isn't double-blind (if the researchers know which is the placebo and which isn't), then this will have an impact on the results (and they seem to think this is why their double-blind experiment got different results). But I'm not sure it's possible to have double-blind experiments when it comes to therapy (someone correct me if I'm wrong), which would likely make many single-blind studies of therapy appear more useful than they are.
If the natural variability is high enough, then any effect will be hidden by it. Sure, we can say with some certainty that the effect of microdosing on most scales is not that huge. But we wouldn't have expected very large effect sizes anyway, because they are almost unheard of (unless the subjects become really severely impaired).
And there may still be meaningful treatment effects, at least judging by the current standard of care for many psychiatric ailments and how those treatments have performed in studies.
Again, I'm quite confident that the effect of many current psychiatric standard of care treatments would never have been picked up by this study. Not because they don't work (at least somewhat), but simply because there is too much noise and natural variability.
If the observed effect of an intervention does not rise above the background variance, it means that the intervention does not do what you want it to do.
Either (1) it has no effect, or (2) the strength and direction of the effect are random. Either of those qualities renders the intervention ineffective. You can't justify giving a patient something that will have no effect, or that will make them worse off half of the time, when more effective options are on the table.
Of course, this comes with the caveat that the study must be adequately powered to detect the strength of the effect you are looking for. That being said, there is plenty of historical data for the placebo, so I don't think that being underpowered because of a misestimation of the background variance would be an issue.
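For a rough sense of what "adequately powered" means here (the Cohen's d values below are conventional benchmarks, not estimates from this study), a quick sketch:

```python
# Sample size per arm needed for 80% power in a standard two-arm t-test,
# across small / medium / large standardized effect sizes.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for d in (0.2, 0.5, 0.8):
    n = analysis.solve_power(effect_size=d, alpha=0.05, power=0.8,
                             alternative='two-sided')
    print(f"d = {d}: ~{n:.0f} subjects per arm")
# d = 0.2 needs ~394 per arm; d = 0.5 needs ~64; d = 0.8 needs ~26.
```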
> it makes sense to look at change from baseline for individuals in the various groups rather than just differences in raw scores between treatment groups.
That’s called cherry picking. Results are always noisy. You could give two groups of people the same tests on different days without any drugs at all and some subset would show “improvement”. If you start focusing on the individuals that show the result you want to see, you cherry-pick your way into false results.
This is a well-known way for researchers to abuse variable or noisy data sets to misleadingly show the result they want to show.
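A quick sketch of that failure mode on pure noise (no drug anywhere in this simulation):

```python
# Two test sessions, same people, no intervention at all. Selecting the
# "responders" after the fact manufactures a large apparent improvement.
import numpy as np

rng = np.random.default_rng(2)
n = 30
day1 = rng.normal(50, 5, n)   # scores on day 1
day2 = rng.normal(50, 5, n)   # same people on day 2, still no drug
change = day2 - day1

print(f"honest mean change:   {change.mean():+.2f}")     # about zero
improvers = change[change > 0]            # keep only those who "improved"
print(f"cherry-picked change: {improvers.mean():+.2f}")  # large and positive
print(f"'responder' rate:     {len(improvers) / n:.0%}") # ~50% by pure chance
```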
> I am almost certain that, e.g., none of the approved and quite convincingly effective SSRIs would have shown any efficacy in a study with this design and a similar N.
That's not true. SSRI studies with ~30 people will show a trend toward improvement in the SSRI group that exceeds the placebo group. I think you're confusing the different statistical measures.
This study showed that expectations and the placebo effect were the predictors of microdosing success. The blinded group and the unblinded group showed completely different results.
> > it makes sense to look at change from baseline for individuals in the various groups rather than just differences in raw scores between treatment groups.
> That’s called cherry picking. Results are always noisy. You could give two groups of people the same tests on different days without any drugs at all and some subset would show “improvement”.
No, GP is talking about a matched-pairs design. You look at the difference between the scores of very similar individuals by applying one treatment to each (active and placebo), or by applying both treatments to the same individual (in random order).
Cherry-picking would mean only using scores from selected individuals, whereas matching just removes between-subject noise so that the treatment difference stands out.
They're not saying you should look at some subset of individuals with positive results, you have misread.
The notion is that the difference between participants can mask the effect of the drug, such that comparing any individual participant to anyone but themselves is improper.
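A minimal sketch of that point, assuming a crossover-style design where each person gets both placebo and active (all parameters invented):

```python
# Small true effect buried under large between-person differences.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 30
baseline = rng.normal(50, 10, n)             # large between-person spread
placebo = baseline + rng.normal(0, 2, n)
active = baseline + 2 + rng.normal(0, 2, n)  # small true effect

# Comparing people to other people: the effect drowns in the spread.
print(stats.ttest_ind(active, placebo).pvalue)  # usually > 0.05
# Comparing each person to themselves: the effect is clear.
print(stats.ttest_rel(active, placebo).pvalue)  # usually << 0.05
```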
Claim what? If you go through the results and exclude the individuals who didn’t show the outcome you wanted to see, that’s called cherry-picking. It’s a well-known phrase.
There is a concept of subgroup analysis in studies like these, but you have to be careful about how it’s done and what conclusions are drawn. If you simply select positive results and exclude negative results then even the placebo group would show great success.
This study showed that telling people they were microdosing was more important for the perceived outcome than the microdosing itself. In other words, placebo is key to making it work.
> If you simply select positive results and exclude negative results then even the placebo group would show great success.
Where did zosima ask to do that? They mentioned that the variable of comparison should be the change from baseline in the treatment group vs. the change from baseline in the control group. That would be a fair study, and isn't cherry-picking.