Not an answer, but a contributory idea: meta-analysis. There are plenty of strong meta-analyses out there, and one thing they tend to do is weigh the methodological rigour of each paper along with its relevance to the combined question being analyzed. Could we use this weighting explicitly in the training process?
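As a rough sketch of what "using the weighting explicitly" might mean: each training example could carry a rigor score (all names and scores below are hypothetical, just to make the idea concrete), and the loss could be scaled per example so that evidence from more rigorous studies pulls harder on the model.

```python
import numpy as np

def weighted_nll(log_probs, labels, rigor_weights):
    """Negative log-likelihood where each example is scaled by a
    meta-analysis-style rigor weight in [0, 1], then normalized
    by the total weight so the scale stays comparable."""
    per_example = -log_probs[np.arange(len(labels)), labels]
    return np.sum(rigor_weights * per_example) / np.sum(rigor_weights)

# Toy batch: 3 examples, 2 classes. The rigor scores are invented,
# e.g. RCT-backed claim > cohort study > case report.
log_probs = np.log(np.array([[0.9, 0.1],
                             [0.2, 0.8],
                             [0.6, 0.4]]))
labels = np.array([0, 1, 0])
rigor = np.array([1.0, 0.5, 0.2])
loss = weighted_nll(log_probs, labels, rigor)
```

Of course, this just pushes the problem back a step: someone still has to assign the rigor scores, which is exactly the "who decides?" concern raised below.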
Thanks. This is helpful. Looking forward to more of your thoughts.
Some nuance:
What happens when the methods themselves are outdated or biased? We highlight a potential case of this in breast cancer in one of our papers.
Worse, who decides?
To reiterate, this isn’t meant to discourage the idea. The idea is good and should be considered, but it doesn’t (yet) escape the core issue of deciding when something becomes a “fact.”