This is completely wrong. Here's a collection of research on Wikipedia's reliability: in multiple studies it has been found to be about as accurate as Encyclopedia Britannica, and its pharmacology entries, for example, were found to be nearly as accurate as dedicated textbooks. Other examples are covered in the article as well.
https://en.wikipedia.org/wiki/Reliability_of_Wikipedia#Compa...
Quite ironic to link to Wikipedia as your source for Wikipedia's reliability. I do think that Wikipedia is quite reliable, so I don't have a problem with that, but I think the article doesn't quite support your point.
While multiple studies comparing several encyclopedias are mentioned, they tend to have small sample sizes (only a few articles). And being on par with other encyclopedias doesn't necessarily make Wikipedia reliable; it could just as well be that the other options are similarly unreliable (it seems like every study found something to criticize).
Additionally, I interpret "Wikipedia's information about pharmacology was 99.7% accurate when compared to a pharmacology textbook" to mean that the pharmacology textbook was taken as the gold standard and Wikipedia got pretty close. But 3 errors per 1000 (whatever their unit of measurement was) might not be reliable enough to be used by pharmacologists.
Every statement in that particular section has sources linked, which means you can see for yourself whether it's true or not. Are you saying studies published in Nature or the New England Journal of Medicine can't be trusted?
> I do think that Wikipedia is quite reliable, so I don't have a problem with that, but I think the article doesn't quite support your point.
Compared to the alternatives, Wikipedia is perhaps the best at combining high reliability, "freshness", and breadth of content. What's your benchmark for measuring reliability? I can't think of many alternatives that come close.
There is increasing concern that most current published research findings are false. The probability that a research claim is true may depend on study power and bias, the number of other studies on the same question, and, importantly, the ratio of true to no relationships among the relationships probed in each scientific field. In this framework, a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance. Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias. In this essay, I discuss the implications of these problems for the conduct and interpretation of research.
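The passage above is the abstract of Ioannidis's 2005 essay "Why Most Published Research Findings Are False." To make its framework concrete, here is a minimal sketch of the positive-predictive-value calculation it describes: the chance that a "statistically significant" finding is actually true, given the pre-study odds R that the probed relationship is real, the significance threshold, the study's power, and a bias term. The parameter values (and the exact way bias is folded in) are illustrative assumptions, not figures from the essay.

    # Sketch of the PPV framing from the abstract above. Parameter values
    # are illustrative, not taken from the essay.
    def ppv(R, alpha=0.05, power=0.8, bias=0.0):
        """Post-study probability that a claimed finding is true.

        R:     pre-study odds that a tested relationship is real
        alpha: type I error rate (false positives among null relationships)
        power: 1 - beta, chance of detecting a real relationship
        bias:  fraction of analyses reported as "findings" regardless of
               the data (selective reporting, flexible analysis, ...)
        """
        true_pos = (power + bias * (1.0 - power)) * R
        false_pos = alpha + bias * (1.0 - alpha)
        return true_pos / (true_pos + false_pos)

    if __name__ == "__main__":
        # Plausible hypothesis, well-powered study, no bias: most claims hold up.
        print(f"R=1:1,  power=0.8, bias=0.0: PPV = {ppv(1.0):.2f}")        # ~0.94
        # Long-shot hypotheses, low power, a little bias: most claims are false.
        print(f"R=1:10, power=0.2, bias=0.1: PPV = {ppv(0.1, power=0.2, bias=0.1):.2f}")  # ~0.16

With generous assumptions (even pre-study odds, 80% power, no selective reporting) most significant findings are true; with long-shot hypotheses, underpowered studies, and a bit of bias, most are false, which is the essay's point.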
I'm saying that Wikipedia has a much more nuanced discussion of its reliability (including drawbacks of the studies involved) than the summary in your comment.