Totally on point regarding multivariate relationships! The problem is not that they run t-tests (and other Stats-101 black-box stuff), but that they stop there. To many of them, even the existence of multivariate effects is beyond imagination.
So, inference to many biomedical folks is just 1-dimensional. Big data has a long way to go to penetrate fields where people cannot think in more than 1 dimension!
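Toy illustration of what I mean (the "severity" confounder, the effect sizes, and all the numbers here are made up for the sketch, assuming numpy/scipy/statsmodels are available): a marginal two-sample t-test can even flip the sign of a treatment effect that a simple multivariate adjustment recovers, Simpson's-paradox style.

```python
import numpy as np
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500

# Hypothetical setup: baseline disease severity (a confounder) drives both
# who gets treated and the outcome, masking the true treatment benefit.
severity = rng.normal(size=n)
# Sicker patients are more likely to receive the treatment.
treated = (severity + rng.normal(scale=0.5, size=n) > 0).astype(float)
# True treatment effect is +1.0; severity hurts the outcome.
outcome = 1.0 * treated - 2.0 * severity + rng.normal(size=n)

# 1-dimensional view: two-sample t-test on treated vs. untreated,
# ignoring severity. The treated group is sicker, so the marginal
# comparison makes the treatment look harmful.
t, p = stats.ttest_ind(outcome[treated == 1], outcome[treated == 0])
print(f"t-test: t = {t:.2f}, p = {p:.3g}")

# Multivariate view: adjust for severity in a linear model.
# The coefficient on 'treated' recovers roughly the true +1.0 effect.
X = sm.add_constant(np.column_stack([treated, severity]))
fit = sm.OLS(outcome, X).fit()
print(fit.params)  # order: [intercept, treated, severity]
```

Nothing exotic, just one extra covariate in a regression, and the conclusion reverses. That is the kind of thing that a "stop at the t-test" workflow never surfaces.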
The best way to get published is to use a variation on a method that has been used many times before. Since the novelty you're focused on is something biological, you stick with the same statistical methods that have been getting papers published for the last decade.
Unfortunately, there just doesn't seem to be much tenure-juice from innovating in statistical methods for most life-science fields. Not all, of course. Science moves slowly.
Eh, look at the journals for the reasoning there. Many editors and reviewers don't have the time to untangle multivariate effects and make certain the math is right. Maybe that's because they can't do it themselves, but the usual heuristic is: if the analysis is that complicated and interconnected and you still come out with a 'clear' picture, odds are something went wrong. I'm NOT saying that is actually true. But most editors get so many submissions every day that they take the easy-to-understand papers with clear results and pathways over the harder-to-understand ones, which are more complicated and therefore more likely to contain errors. This feeds back into the grad schools, so only simple pathways and mechanisms get encouraged and nurtured, while complex ones are left alone in the publish-or-perish environment. And yes, this is BS, but that is how grants get funded.