
This is a good quote:

> “One could describe a team’s interactions as ‘sparky’ and confidently predict that their emotions would be subject to the same laws that govern the dielectric breakdown of air under the influence of an electric field. Alternatively, the interactions of a team of researchers whose journal articles are characterized by ‘smoke and mirrors’ could be modeled using the physics of airborne particulate combustion residues, combined in some way with classical optics.”



To be honest, I think this story is overplayed a bit. I've heard of that tipping-point idea before, and the positive-to-negative ratio thing, but I think most in the field never took it as more than an observation that couples, groups, etc. that function well are more positive and supportive.

It's good to call this stuff out, but part of the reason no one else did is that, outside of a certain sub-subfield, no one took it that seriously.


I strongly disagree with this take.

The whole point of science is to acquire knowledge that eventually gets communicated outside of the scientific body producing it. If the papers never leave the ivory tower, or never lead to other papers that do, it's just navel gazing.

When a scientific community has a chunk of its work that everyone in the tower knows is bullshit but that never gets corrected, that metadata doesn't leave the ivory tower. What happens instead is that papers like this flutter out the window, get picked up by journalists who don't have access to the tribal knowledge, and then get spread far and wide. The information appears legitimate because it did come from a trusted scientific body. It just had the necessary context left off: that it's actually bullshit.

Every scientific community has an obligation to ensure that the papers they publish and advocate actually mean what they say they mean. If nothing else, because not doing that ultimately tarnishes the brand and undermines the trustworthiness of that community.


> Every scientific community has an obligation to ensure that the papers they publish and advocate actually mean what they say they mean. If nothing else, because not doing that ultimately tarnishes the brand and undermines the trustworthiness of that community.

And, in the end, of science as a whole.


Well said. Also "that metadata doesn't leave the ivory tower" is wonderfully concise.


Overplayed? It jeopardises the entire edifice of that subfield, and by extension psychology itself, if it is not vigorously disowned.

Sokal said it well (from the article):

> “For me, the real question is not about Fredrickson or Losada or Seligman,” Sokal says. “It’s about the whole community. Why is it that no one before Nick—and I mean Nick was a first-semester part-time Master’s student at, let’s be honest, a fairly obscure university in London who has no particular training in mathematics—why is it that no one realized this stuff was bullshit? Where were all the supposed experts?”

> “Is it really true that no one saw through this,” he asks, “in an article that was cited 350 times, in a field which touts itself as being so scientific?”


I think the answer to this is remarkably simple. You can align scientists and fields along a spectrum and group them into roughly two categories:

The scientists who make falsifiable theories and run reproducible experiments -and- everybody else.

On the left side of the spectrum are physicists working in long-understood areas, such as "predict the motion of elements of the solar system 100 years into the future using Newton's laws and observed data"; in the middle are "biologists explaining the behavior of C. elegans using a combination of molecular biology and machine learning"; and on the far right are "psychologists attempting to come up with universal rules about human behavior using studies with tiny n".

The folks on the far, far left have a fairly straightforward system that works pretty well. If you claim something like "FTL" or "a new energy source", you describe your apparatus, publish your data, and show people a demo in the lab; if it's real, somebody else in the world will be motivated to replicate your experiment, or to help show you why you're technically wrong. Almost nobody makes it far along this path by repeatedly publishing interesting but provably wrong results.

In that middle zone there's still an air of replicability, but important studies turn out to be really hard to replicate for dumb reasons, like "the temperature of the lab is off by 0.5C" or "you aren't using the exact same plasmid as mine, which I personally isolated in <Nobel prize winner's lab> during my postdoc". It's actually quite easy to publish fairly wrong science, for quite some time, without anybody really noticing. For example, hundreds and hundreds of papers turn out to have copy-pasted gels, where the authors just made up results and published them along with their conclusions. I guess nobody bothered to replicate those papers!

On the right side are psychologists. I don't think this field, with the exception of a few very limited areas, even has the statistical power to support the strong conclusions authors come to in 99.9% of papers. It's better to think of this area as people just sort of writing what they want to believe and placing a pseudoscientific veneer on top. This would fall into the "not even wrong" category: it doesn't even really qualify to be evaluated as science. Or, if it does, it falls into the category of "speculative complex systems theory".
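
To put a rough number on the power point, here's a minimal sketch (Python with statsmodels; the effect sizes and per-group sample sizes are my own illustrative assumptions, not figures from the article or the studies in question) of how often a small-n two-group design would even detect a real effect:

    # Power of a two-sided two-sample t-test at alpha = 0.05, across
    # illustrative Cohen's d effect sizes and per-group sample sizes
    # (assumed values, not numbers from the article).
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    for n_per_group in (20, 50, 200):
        for d in (0.2, 0.5):  # "small" and "medium" effects, in Cohen's terms
            p = analysis.power(effect_size=d, nobs1=n_per_group,
                               alpha=0.05, ratio=1.0)
            print(f"n={n_per_group:>3} per group, d={d}: power={p:.2f}")

With n=20 per group and a medium effect (d=0.5), power comes out around 0.3, i.e. the study misses a real effect roughly two times out of three; the conventional target is 0.8.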


Right after that statement the article describes how people did indeed see through this and quotes three such people.


350 citations is not much. Retracted studies get cited more than that.


You may well be right, but this is too casual an approach toward what purports to be science in a nominally good journal. Modern science is amazingly well insulated from “in-house” criticism after the review process. As a result, crap can flourish. We need more data-analyst wasps.


The founders of that field got tens of millions of dollars for continuing that bunk research. You don't think that funding could have been used better?



