The website dies if I try to figure out who the author (“sam”) is, but it sounds like they are used to some awful backwater of academia.
They have this idea that a single editor screens papers to decide whether they are uninteresting or fundamentally flawed; then they want a bunch of professors to do the grunt work of litigating the correctness of the experiments.
In modern (post-industrial-revolution) branches of science, the work of deciding what is worthy of publication is distributed across a program committee made up of reviewers. The editor or conference organizers pick the program committee. There are typically dozens of program committee members, and both authors and reviewers disclose conflicts of interest. Papers are also anonymized, so the people who see the author list are not involved in accept/reject decisions.
This mostly eliminates the problem where work is suppressed for political reasons, etc.
It is increasingly common for paper PDFs to be annotated with badges showing the level of reproducibility of the work, and papers can win awards for being highly reproducible. The people who check reproducibility simply execute the directions in a separate reproducibility submission, produced after the paper is accepted.
I argue the above approach is about 100 years ahead of what the blog post is suggesting.
Ideally, we would tie federal funding to double blind review and venues with program committees, and papers selected by editors would not count toward tenure at universities that receive public funding.
The computer science practice you describe is the exception, not the norm. It causes a lot of trouble when evaluating the merits of researchers, because most people in academia are not familiar with it. In many places, conference papers don't even count as real publications, putting CS researchers at a disadvantage.
From my point of view, the biggest issue is accepting or rejecting papers based on first impressions. Because there is often only one round of reviews, you can't ask the authors for clarifications, and they can't try to fix the issues you have identified. Conferences also tend to follow fashionable topics, and they are often narrower in scope than they claim to be, because it's easier to evaluate papers on topics the program committee is familiar with.
The work done by the program committee was never supposed to be proper peer review, only a first filter. Old conference papers often call themselves extended abstracts, and they don't contain all the details you would expect in a full paper; for example, a theoretical paper may omit key proofs. Once the program committee has determined that the results look interesting and plausible, and the authors have presented them at a conference, the authors are supposed to write the full paper and submit it to a journal for peer review. Of course, this doesn't always happen, for a number of reasons.