
> It was an interesting read, but the author showed absolutely no clue why pre-registration was important. The point is to avoid selection bias.

You've missed his fundamental point: mechanisms like pre-registration are excellent for defending against plucking signal out of random sampling error. They do nothing when you are investigating a real effect that is not what you think you are investigating. The point of the spooky studies listed is that in the psi experiments there is still a signal of something. We don't think it's actual psi in the sense of supernatural activity, but the alternate explanations seem almost as bizarre and have almost as painful implications.



I don't think btilly thought he was refuting the article, just commenting on a not-very-central-but-important misperception it seemed to contain, and the comment seems accurate: implicit precommitment to experimental details in a replication solves some of what pre-registration is for, but not all of it, and arguably not the more important part. That holds whether or not the article's thesis does, and it doesn't undermine the thesis.


I did not miss his fundamental point. I was making a different fundamental point about why I view virtually every meta-analysis out there as even more suspect than regular research.

It is the criticism most commonly raised by Randi et al., and it should be the most worrisome one for the author, because it throws into serious question the kind of research he thinks should be most reliable.


> I did not miss his fundamental point. I was making a different fundamental point about why I view virtually every meta-analysis out there to be even more suspect than regular research.

It certainly sounded like you missed it, but regardless: the meta-analysis literature is well aware of the selection issue. Dealing with it is half the point of meta-analytic techniques: p-curves, heterogeneity tests, funnel plots, the binomial test, trim-and-fill, and so on. Publication bias has been repeatedly quantified, and these techniques work reasonably well: when meta-analyses are compared against very large RCTs, where selection issues are not a concern, the agreement is pretty good (confidence intervals are violated a bit more often than they should be, but not horribly so). So you're either ignoring OP's very interesting points or tendentiously overrating an issue. Neither makes for a worthwhile comment.
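To make the funnel-plot idea concrete, here is a minimal sketch (not from the thread; all numbers and the publication filter are illustrative) of Egger's regression test, one standard way to quantify funnel-plot asymmetry: simulate studies of a small true effect where "significant" results are preferentially published, then regress standardized effects on precision and look at the intercept.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulate 40 published studies of a small true effect (d = 0.2),
# with a crude publication filter: significant results always get
# published, null results only 30% of the time. These settings are
# illustrative assumptions, not anything claimed in the thread.
true_d = 0.2
effects, ses = [], []
while len(effects) < 40:
    n = int(rng.integers(20, 200))      # per-group sample size
    se = float(np.sqrt(2 / n))          # rough SE of Cohen's d
    d = float(rng.normal(true_d, se))   # observed effect size
    if d / se > 1.96 or rng.random() < 0.3:
        effects.append(d)
        ses.append(se)

effects = np.array(effects)
ses = np.array(ses)

# Egger's regression test: regress the standardized effect (d / SE)
# on precision (1 / SE). An intercept far from zero signals
# funnel-plot asymmetry, i.e. small-study / publication-bias effects.
res = stats.linregress(1 / ses, effects / ses)
t_intercept = res.intercept / res.intercept_stderr
print(f"Egger intercept = {res.intercept:.2f} (t = {t_intercept:.1f})")
```

With the biased filter, small (high-SE) studies only survive when they overshoot the true effect, which skews the funnel and pushes the intercept away from zero; trim-and-fill and p-curve methods attack the same asymmetry from different angles.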



