I know of no review process that produces the same level of understanding as authorship does. The author must build the model from scratch and so must see all the details, while the reviewer can get away with less work because they're fundamentally riding on the author's understanding.
In fact, in a high-trust system, e.g. a good engineering culture at a tech company, the reviewer will learn even less: they aren't worried about the author making serious mistakes, so they put less effort into understanding.
I've experienced this from both sides of the transaction for code, scientific papers, and general discussion. There's no shortcut to the level of understanding given by synthesizing the ideas yourself.
> "the author must build the model from scratch and so must see all the details"
This is not true. With any complex framework, the author first learns how to use it; then, when they build the model, they draw on that learned knowledge. And once they are experienced, they don't see all the details - they just write them out without thinking about it (chunking). This is essentially what an LLM does: it short-circuits the learning process so you can write more "natural", short thoughts and have the LLM translate them into working code, without ever learning and chunking the irrelevant details of the framework.
I would say that whether this is good or not depends on how clunky the framework is. If it is a clunky framework, then using an LLM is very reasonable, much as IDE templating is almost a necessity for Java. If it is a "fluent", natural framework, then maybe an LLM is not necessary - but I would argue no framework is at that level currently, so using an LLM is still warranted. Probably the only way to achieve true fluency is to integrate the LLM; there have been various experiments posted here on HN. But absent that level of natural-language-style programming, there will be a mismatch between thoughts and code, and an LLM reduces that mismatch.
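To make the "clunky framework" point concrete, here is a small illustrative sketch (the class and fields are made up, not from any particular codebase): a plain Java data class where the actual intent is one line of thought - "a user has an id and a name" - and nearly everything else is the boilerplate an IDE template or an LLM would typically generate, the kind of detail an experienced author writes without thinking and a reviewer skims past.

    import java.util.Objects;

    // Hypothetical example: one line of intent (a user has an id and a name),
    // the rest is boilerplate an IDE template or an LLM usually fills in.
    public class User {
        private final long id;
        private final String name;

        public User(long id, String name) {
            this.id = id;
            this.name = name;
        }

        public long getId() { return id; }
        public String getName() { return name; }

        @Override
        public boolean equals(Object o) {
            if (this == o) return true;
            if (!(o instanceof User)) return false;
            User other = (User) o;
            return id == other.id && Objects.equals(name, other.name);
        }

        @Override
        public int hashCode() {
            return Objects.hash(id, name);
        }

        @Override
        public String toString() {
            return "User{id=" + id + ", name='" + name + "'}";
        }

        public static void main(String[] args) {
            // The underlying "thought" is just: a user named Ada with id 1.
            User u = new User(1, "Ada");
            System.out.println(u); // prints User{id=1, name='Ada'}
        }
    }

(Modern Java records collapse most of this ceremony, which is roughly the "more fluent framework" direction; the point is only that the gap between the thought and the code is what tooling or an LLM papers over.)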
I believe pretty much anyone who has observed a few cycles can tell as much.
Often the major trigger for a rewrite is that the knowledge has mostly left the building.
But then there's the cognitive dissonance: we like pretending that the system is the knowledge and thus has economic value in itself, and that people are interchangeable. None of which is true.
It is similar to how much a student learns from working hard to solve a problem versus from being given the final solution. The effort to solve it yourself tends to give a deeper understanding and make it easier to remember.
Maybe that's not the case in all fields, but it is in my experience at least, including in software. Code I've written I know on a personal level, while I can much more easily forget code I've only reviewed.