Don't do any of this. It's very outdated advice, and you're going to get it wrong anyway. These threshold adjustment methods were invented before we had access to reasonable computers.
There's a far simpler method that covers every case: permutation tests. https://bookdown.org/ybrandvain/Applied-Biostats/perm1.html
You shuffle the data. Say you want to know if viewing time is affected by color. Literally shuffle the color labels against the viewing times at random, recompute your statistic, and repeat many times. Then look at that distribution: is the statistic you actually observed extreme compared to what shuffling produces?
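A minimal sketch of what this looks like in Python (the difference-in-means statistic and the 10,000 shuffles are choices I'm making for the example, not something the method dictates):

    import numpy as np

    rng = np.random.default_rng(0)

    def permutation_test(values, labels, n_perm=10_000):
        # Two-group permutation test on the difference in means.
        # `labels` must contain exactly two distinct group labels.
        values = np.asarray(values, dtype=float)
        labels = np.asarray(labels)
        a, b = np.unique(labels)

        def stat(lab):
            return values[lab == a].mean() - values[lab == b].mean()

        observed = stat(labels)
        # Shuffling the labels breaks any real association with the
        # values, which is exactly the null hypothesis.
        null = np.array([stat(rng.permutation(labels))
                         for _ in range(n_perm)])
        # Two-sided p-value; the +1 counts the observed labeling itself.
        return (np.sum(np.abs(null) >= np.abs(observed)) + 1) / (n_perm + 1)

Call it as permutation_test(viewing_time, color); swap in whatever statistic you actually care about.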
As long as you shuffle everything related to your experiment and you don't double-dip into the data, you're going to get things right.
This also has the big advantage that it doesn't overcorrect like traditional methods, which apply corrections so strict that with enough comparisons it becomes nearly impossible to get a significant result (Bonferroni at alpha = 0.05 across 20 tests demands p < 0.0025 per test).
This post hasn't even begun to scratch the surface of what can go wrong with traditional tests. Just don't use them.
This has nothing to do with speed or rigor. Permutation tests are much simpler to run and faster to analyze. Sadly we keep teaching crappy statistics to our students.
Permutation tests don't account for the family-wise error rate, so I'm curious why you would say that "it doesn't overcorrect like traditional methods".
I'm also curious why you say those "cover every case", because permutation tests tend to be underpowered, and also tend to be cumbersome when it comes to constructing confidence intervals of statistics, compared to something like the bootstrap.
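To make the contrast concrete, a percentile bootstrap interval is a few lines (a sketch; np.mean is just a placeholder statistic):

    import numpy as np

    rng = np.random.default_rng(0)

    def bootstrap_ci(data, stat=np.mean, n_boot=10_000, alpha=0.05):
        # Percentile-bootstrap confidence interval for an arbitrary
        # statistic: resample with replacement, recompute, read off
        # the quantiles of the resampled statistics.
        data = np.asarray(data, dtype=float)
        boot = np.array([stat(rng.choice(data, size=data.size, replace=True))
                         for _ in range(n_boot)])
        return tuple(np.quantile(boot, [alpha / 2, 1 - alpha / 2]))

Getting an equivalent interval out of a permutation test generally means inverting the test over a grid of candidate parameter values, which is the cumbersome part.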
Don't get me wrong -- I like permutation tests, especially for their versatility, but as one tool among many.
Not only has Haskell "abandoned" cons cells, its lists were never Lisp-style cons cells in the first place: the type system forbids an improper pair like (a . 1) entirely.
In Haskell there's a movement to get rid of much of the special syntax for lists, and a lot of code has moved on from lists entirely. Many people never use them; I don't.
What killed lists is the reality of their horrific performance. They were great 40 years ago, but memory latency and CPU speed have scaled at such different rates that chasing pointers through a linked list, with a cache miss at every node, is now almost always the wrong choice.
Yeah, before Uber and Lyft I would get a Mercedes.
Except that it took forever. I had no idea when anyone would show up. The driver was annoyed and drove like an insane person. The few times I've actually feared for my life have been on highways with taxi drivers. It was incredibly expensive.
Oh, and half the time they ripped you off.
Yup. And there was no tracking. So if that person wanted to, say, drive an insane route? Enjoy. Take a detour? Done. Or dump your body in the woods. You were totally at their mercy.
The taxi system was horrible. The pinnacle of protectionism carving out its niche of crap.
Don't get your hopes up. This is astronomically far from anything real. The hard part hasn't even started.
All they know right now is that humans can tolerate their blood product. They have no idea if it actually helps. And testing that is going to be an ethical mess.
We've already been through this! PolyHeme was developed for decades, went into trials in the mid-2000s, and was a disaster. https://en.wikipedia.org/wiki/PolyHeme
Testing PolyHeme was a landmark in research ethics in the US. Obviously not in a good way. The problem is that you can only test these things in people who are very sick and then you hope that you aren't killing them. That's sketchy at best.
In my space, "mostly right enough" isn't useful. Particularly when that means that the errors are subtle and I might miss them. I can't write whitepapers that tell people to do things that would result in major losses.
Everyone is jumping the gun here. You just haven't told us enough.
How much time do you have? How much do you know about hardware and software? What do you want to do? How much money do you want to invest? What do you want to learn?
The path forward changes entirely depending on the answers to these questions.
> They can’t be fired, but if they stop bringing in grants, they can face steep pay cuts and lab closure.
This describes the situation of tenured faculty (who are definitely who I had in mind when I referred to splashy big names), but universities have long been moving to a model with as few tenured or tenurable faculty as possible, where some instructors are full time but non-tenure-track, and others are part time (and so, for example, don't have to be paid benefits). At my university these are called lecturers and adjuncts, but other names exist. Both jobs involve renewable contracts (of different lengths), so they need not even be fired, just not have their contracts renewed.
"I looked at the representations of a network and I don't like them".
Great! There's no mathematical definition of what a fractured representation is. It's whatever aesthetic preferences you have.
Our personal preferences aren't a good predictor of which network will work well. We wasted decades with classical AI and graphical models encoding our aesthetics into models, only to find out that the results are totally worthless.
Can we stop please? I get it. I too like beautiful things. But we can't hold on to things that don't work. Entire fields like linguistics are dying because they refuse to abandon this nonsense.