I agree that most labs are probably not out to defraud people. But without replication I don't think it's reasonable to have much confidence in what is published.
Replication happens over time.
For example, during my PhD I wanted to grow TaS2 monolayers on a graphene layer on an iridium crystal. So I took published growth recipes for related materials, adapted them to our setup, and then fine-tuned the recipe for TaS2. In doing so I basically "peer replicated" the growth from the original paper. I then took those samples to a measurement device and modified them in situ by evaporating Li atoms on top (which was the subject of my actual paper, but I needed a sample to modify first). I published the paper with the growth recipe and the modification procedure, and other colleagues then used those instructions to grow their own samples for their own studies (I think it was MoS2 on graphene on cobalt that they grew).
This way papers are peer replicated in an emergent manner, because knowledge is passed from one group to another and each group applies parts of it to their own research. You have to look at this more holistically: individual papers don't mean too much, it's their overlap that generates scientific consensus.
In contrast, requiring some random reviewer to replicate my full paper would be an impossible task. They would not have the required equipment (there are only two lab setups in the whole world with the necessary instruments), they would probably not have the required knowledge (our research only partially overlaps: we study the same materials, but I use angle-resolved photoemission experiments while they do electronic transport), and they would need to spend weeks adapting the growth recipe before their sample quality matched mine.
That's not what publication is about. Publication is a conversation with other researchers; it is part of the process of reaching the truth, not its endpoint.
People in general (at least on da Internetz) seem to focus way too much on single studies, and way too little on meta-studies.
AFAICT meta-studies are the level where we as a society can really try to say something intelligent about how stuff works. If an important question is not covered by any meta-study, we (i.e. universities and research labs) probably need to do more research on that topic before we can really say much about it.
Sure, and scientists need a place to have such conversations.
But publication is not a closed system. The "published, peer-reviewed paper" is frequently an artifact used to decide practical policy matters in many institutions both public and private. To the extent that Science (as an institution in its own right) wants to influence policy, that influence needs to be grounded in reproducible results.
Also, I would not be surprised if stronger emphasis on reproducibility improved the quality of conversation among scientists.
Maybe replication should happen (and probably does) when a published result is both relevant to some entity and interesting enough to pursue.
I've never seen papers as "truth", but more as "possibilities". After many other "proofs" (products, papers, demos, etc.) you can assign some concepts/ideas the label "truth", but one or two papers from the same group are definitely not enough.
Yeah, passing peer review doesn't mean the article is perfect and should now be taken as truth (and remember, to err is human; any coder on here has had some long-standing bug that went mostly unnoticed in their code base). It means it passed the journal's standards for novelty, interest, and rigor based on the described methods, as ascertained by the editor / area chair and peer reviewers who are selected for being knowledgeable on the topic.
Implicit in this process is that the authors are acting in good faith. Treating authors as hostile would be demoralizing for reviewers (who wants to be that cynical about their own field?) and would require extensive verification of every statement, well beyond what is feasible if the review is to be returned in a timely manner.
Unless your paper has mathematical theory (and even then, mistakes do slip through), a publication should not be taken as proof of something on its own, but as a data point. Over time and with enough data points, a field builds the evidence to turn a hypothesis into a scientific theory.
Unfortunately there's a lot of evidence that fraud really is very prevalent and we don't hear about it anywhere near enough. It depends a lot on the field though.
One piece of evidence comes from software like GRIM and SPRITE. GRIM was run over psychology papers and found that around 50% contained impossible means, i.e. values that could not be arrived at by any combination of allowed inputs [1]. The authors generally did not cooperate in helping to uncover the sources of the problems.
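The GRIM check itself is just arithmetic: a mean of n integer-valued responses must equal some integer sum divided by n, so many reported means are simply unreachable. A minimal sketch of the idea (the real tool handles rounding conventions and composite measures more carefully):

```python
import math

def grim_consistent(reported_mean, n, decimals=2):
    """GRIM test sketch: can a mean reported to `decimals` places arise
    from n integer-valued observations (e.g. Likert-scale responses)?"""
    # Any true mean that rounds to reported_mean lies within half a unit
    # of the last reported digit, so the sum of the n integers must fall
    # inside the corresponding window around reported_mean * n.
    half = 0.5 * 10 ** -decimals
    lo = math.ceil((reported_mean - half) * n)
    hi = math.floor((reported_mean + half) * n)
    return hi >= max(lo, 0)

print(grim_consistent(3.48, n=25))  # True: 87 / 25 = 3.48 exactly
print(grim_consistent(3.51, n=17))  # False: no integer sum of 17 items gives 3.51
```

With n = 17, the nearest achievable means are 59/17 ≈ 3.47 and 60/17 ≈ 3.53, so a reported 3.51 is impossible, which is exactly the kind of inconsistency GRIM flags.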
Another comes from estimates by the editors of well-known journals. For example, Richard Horton at the Lancet is no stranger to fraud, having published and promoted the Surgisphere paper. He estimates that maybe 50% of medical papers make untrue claims, which is interesting in that his intuition matches the number obtained in a different field by a more rigorous method. The former editor of the New England Journal of Medicine stated that it was "no longer possible to believe much of the medical research that is published".
50%+ is a number that crops up frequently in medicine. The famous Ioannidis paper, "Why most published research findings are false" (2005) has been cited over 12,000 times.
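The core of Ioannidis's argument is a base-rate calculation: even with honest methods, if only a small fraction of tested hypotheses are true, false positives can outnumber true positives among "significant" findings. A minimal sketch of that arithmetic (the 2005 paper adds further terms for bias and multiple competing teams, omitted here; the parameter values below are illustrative, not from the paper):

```python
def positive_predictive_value(prior_odds, power, alpha):
    """Fraction of statistically significant findings that are true,
    given the prior odds that a tested hypothesis is true."""
    true_pos = power * prior_odds  # true hypotheses that reach significance
    false_pos = alpha              # false hypotheses significant by chance
    return true_pos / (true_pos + false_pos)

# Illustrative: 1 true hypothesis per 10 false ones, 20% power, p < 0.05.
# PPV = 0.02 / (0.02 + 0.05) ~= 0.29, i.e. most "findings" are false.
print(positive_predictive_value(prior_odds=0.1, power=0.2, alpha=0.05))
```

The point of the exercise is that "most findings false" needs no fraud at all; it falls out of low prior odds and underpowered studies alone.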
Marc Andreessen has said in an interview that he talked to the head of a very large government grant agency and asked him whether it could really be true that half of all biomedical research claims were fake. The guy laughed and said no, it's not true; it's more like 90%. [2]
Elizabeth Bik uncovers a lot of fraud. Her work is behind the recent resignation of the president of Stanford University, for example. Years ago she said, "Science has a huge problem: 100s (1000s?) of science papers with obvious photoshops that have been reported, but that are all swept under the proverbial rug, with no action or only an author-friendly correction … There are dozens of examples where journals rather accept a clean (better photoshopped?) figure redo than asking the authors for a thorough explanation." In reality there seem to be far more than mere thousands, as there are companies ("paper mills") that specialize in professionally producing fake scientific papers, and whole markets where they are bought and sold.
So you have people who run the scientific system saying, on the record, that they think science is overrun with fake results. And there is some quantitative data to support this. It also seems to happen quite often now that presidents of entire universities are caught having engaged in, or signed off on, rule-breaking behavior like image manipulation or plagiarism, implying that this behavior is at least rewarded or possibly just very common.
There are also whole fields whose underlying premises are known to be false, which is arguably also pretty deceptive (e.g. "bot studies"). If you include those, then it is indeed quite likely that most published research is simply untrue.