This is an amazing list, though given how accurate the rest of it is, I'm curious how you drew the conclusion that code reviews are a waste of time?
Three people commented on my calling code reviews a waste of time. To kick off my discussion, I should reveal that I stopped doing code reviews 20 years ago. That actually turns out to be significant. Here's why.
For the first 20 years of my career I worked at 5 different places and so have some experience with different development cultures. They all did code reviews and they all had the same flaw - they focused almost exclusively on the wrong things. There would be endless arguments about indentation, brace placement, variable naming and so on. Only a little of the time was spent on what actually matters - the implementation. C++ (which was hugely popular in the '90s) made this even worse - now people could argue about whether a function should be overridable (virtual), whether members should be private, and on and on. After a few years of working on our own internal framework and with commercial frameworks, it dawned on me that nobody got it right. Meanwhile all these inane arguments led to time being wasted, projects missing their dates, team members arguing over stupid crap - ugh! I saw this with multiple teams across multiple organizations. Different people, same problem.
So what happened 20 years ago? That's when what we now call CI/CD tools first started appearing (remember CruiseControl?). With that we saw the advent of automated unit test frameworks, tools that could check the code against coding guidelines, tools that could measure the amount of CRAP (Change Risk Analysis and Predictions - remember Crap4j?), and tools that could assess the security risk the code poses. All of this is automated. I saw that automating the code reviews could get rid of a lot of the negativity, be objective, and focus on the implementation and quality of the code. This is the way.
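For anyone who hasn't run into it, the CRAP metric that Crap4j popularized combines a method's cyclomatic complexity with its test coverage, so code that is both complex and untested scores worst. Here's a minimal sketch of the published formula in Java (the class and method names are mine, not Crap4j's API):

```java
public class CrapScore {
    /**
     * CRAP score as published by the Crap4j project:
     *   CRAP(m) = comp(m)^2 * (1 - cov(m)/100)^3 + comp(m)
     * where comp(m) is the method's cyclomatic complexity and
     * cov(m) is its test coverage as a percentage (0-100).
     */
    public static double crap(int complexity, double coveragePercent) {
        double uncovered = 1.0 - coveragePercent / 100.0;
        return Math.pow(complexity, 2) * Math.pow(uncovered, 3) + complexity;
    }

    public static void main(String[] args) {
        // A simple, fully covered method scores low...
        System.out.println(crap(3, 100.0));  // 3.0
        // ...while a complex, untested one scores terribly.
        System.out.println(crap(25, 0.0));   // 650.0
    }
}
```

Crap4j's traditional threshold was 30: score above that and you either simplify the method or cover it with tests to bring the number down.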
When code is pushed to the repo I have evidence of its quality: I know it conforms to our coding conventions (which I'm not a big believer in, but some people like to die on that hill), it's been tested (we require 40% test coverage, and getter/setter tests don't count), its CRAP score has been assessed, and it's had a security scan looking for vulnerabilities. All automated.
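To make that gate concrete, here's a hypothetical roll-up in Java - the record fields, thresholds, and messages are stand-ins for whatever your pipeline's lint, coverage, CRAP, and security stages actually report, not any particular tool's API:

```java
import java.util.ArrayList;
import java.util.List;

/** Hypothetical roll-up of the automated checks a push must clear. */
public class QualityGate {
    // Results collected from the pipeline's earlier stages.
    record CheckResults(boolean lintClean, double lineCoveragePercent,
                        double worstCrapScore, int knownVulnerabilities) {}

    static List<String> failures(CheckResults r) {
        List<String> problems = new ArrayList<>();
        if (!r.lintClean())
            problems.add("code does not conform to the coding conventions");
        if (r.lineCoveragePercent() < 40.0)   // getter/setter tests excluded upstream
            problems.add("test coverage below 40%: " + r.lineCoveragePercent() + "%");
        if (r.worstCrapScore() > 30.0)        // Crap4j's traditional threshold
            problems.add("method CRAP score too high: " + r.worstCrapScore());
        if (r.knownVulnerabilities() > 0)
            problems.add(r.knownVulnerabilities() + " known vulnerabilities found");
        return problems;
    }

    public static void main(String[] args) {
        var results = new CheckResults(true, 35.0, 12.0, 0);
        failures(results).forEach(p -> System.out.println("GATE FAILED: " + p));
    }
}
```

An empty list means the push passes; anything else blocks it and is reported straight back to the developer.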
Junior developers get immediate feedback on the quality of their code and can fix it. If they don't understand how to fix it, they reach out to one of us and we mentor them on what is being reported and what the actual issue is. It's all very positive.
What I should have said then was that manual code reviews are a waste of time - you want to automate that activity! I don't ever want to see a code review activity scheduled on my calendar!
Thanks for your response. I definitely agree that bikeshedding can be a problem in code reviews and is best avoided. Automating that stuff away with linting tools is one good option, and so too is having some conventions and/or understanding and accepting the preferences of different team members rather than actively engaging in bikeshedding over things that are a difference, but not a difference that makes a difference.
In my experience the value of code reviews doesn't come from nitpicking those types of things. That's by far the worst part of code reviews and is definitely annoying, but it mostly stops being a problem if you employ the above-mentioned tools and practices. Rather, the value comes from others catching mistakes in your implementation that arise from some gap in knowledge - a gap the reviewer is able to spot and fill in. And to that end they're absolutely essential. They're not simply a box-ticking exercise either - the proportion of code reviews I've both done and received that picked up on genuinely important issues is in the high double digits, percentage-wise. I would take no code reviews at all to be a major red flag.