This isn't true, though. There are a lot of openings where the humans turn out to be right anyway, even though the computer thinks it's found a marginally better move. Humans have a better intuitive understanding of how certain openings create endgame possibilities, and if I remember correctly the combination of a Super GM and an engine is still markedly stronger than an engine alone.
Engines have contributed substantially to modern opening books but they haven't supplanted the existing knowledge. Humans turned out to be wrong about many sharp lines (which were refuted by computer) and the computer can find really interesting ideas in many positions (which would be nearly impossible for a human to find) but the old human-approved Best Openings are still standing tall after the engine revolution.
> if I remember correctly the combination of a Super GM and an engine is still markedly stronger than an engine alone.
This may have been true when engines were still only marginally stronger than humans, but I haven't seen any evidence that it is still true. A few years ago Nakamura + Rybka (a previous top engine) lost to Stockfish.
At the time of that match Rybka was no longer one of the strongest engines, and "correspondence chess" (human + computer) is still played.
The strongest correspondence players are not GMs, as far as I know, and a very important part of those games is trying to steer into positions where the opponent's engine might make a slight mistake.
I can certainly accept that computer recommendations won't always chime with human abilities and play style, and so accepting engine recommendations could be detrimental to human results. Beyond that, I'd be intrigued to see the numbers on the types of positions where engine evaluations diverge wildly from the empirical results when the position is played out thousands of times. I heard this claimed about the French Defence recently, for example, but without any real evidence.
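To make the "play it out thousands of times" idea concrete, here is a toy sketch of the methodology: compare a static judgement of a position against the empirical score from many playouts. This is deliberately not chess (a real engine binary would be needed for that); it uses tic-tac-toe with random playouts standing in for engine games, and all function names are hypothetical, not from any chess tool.

```python
import random

# All eight winning lines on a 3x3 board indexed 0..8.
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if a line is completed, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def playout(board, to_move, rng):
    """Play random moves to the end; return 'X', 'O', or 'draw'."""
    board = board[:]
    while True:
        w = winner(board)
        if w:
            return w
        empties = [i for i, s in enumerate(board) if s == "."]
        if not empties:
            return "draw"
        board[rng.choice(empties)] = to_move
        to_move = "O" if to_move == "X" else "X"

def playout_score(board, to_move, n=5000, seed=0):
    """Empirical score for X in [0, 1]: win=1, draw=0.5, loss=0."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        result = playout(board, to_move, rng)
        total += 1.0 if result == "X" else 0.5 if result == "draw" else 0.0
    return total / n

# A naive static count sees equal material here, but X is to move with
# an immediate win available, which the playouts pick up.
position = list("XX.OO....")
print(round(playout_score(position, "X"), 2))
```

The interesting cases in the chess discussion are exactly the ones where the analogue of `playout_score` disagrees with the engine's static evaluation of the same position.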
Anyway, I was just making a slightly facetious point that all our existing openings came about by a slow, semi-random process in which people tried moves, played out the games, and then looked at the results.
I really disagree with the "random" part, though. Human openings need to at least have some "theory" around them to know what to do if your opponent diverges from the main lines.
Usually the "real evidence" that human choices are sometimes better than the machine's is that, given enough time, the engines come around to agreeing with the human choices.