
Source? This is unintuitive to me, I can't come up with a rationale.


I know of no review process that produces the same level of understanding as authorship does. The author must build the model from scratch and so must see all the details, while the reviewer can do less work because they're fundamentally riding on the author's understanding.

In fact, in a high-trust system, e.g. a good engineering culture at a tech company, the reviewer will learn even less: they won't be worried about the author making serious mistakes, so they'll put less effort into understanding.

I've experienced this from both sides of the transaction for code, scientific papers, and general discussion. There's no shortcut to the level of understanding given by synthesizing the ideas yourself.


> "the author must build the model from scratch and so must see all the details"

This is not true. With any complex framework, the author first learns how to use it; when they build the model, they are drawing on that learned knowledge. And once they are experienced, they don't see all the details, they just write them out without thinking about it (chunking). This is essentially what an LLM does: it short-circuits the learning process so you can write more "natural", short thoughts and have the LLM translate them into working code, without ever learning and chunking the irrelevant details of the framework.

I would say that whether this is good or not depends on how clunky the framework is. If the framework is clunky, then using an LLM is very reasonable, much as IDEs with templating are almost a necessity for Java. If it is a "fluent", natural framework, then maybe an LLM is not necessary, but I would argue no framework reaches that level today, so using an LLM is still warranted. Probably the only way to achieve true fluency is to integrate the LLM - there have been various experiments posted here on HN. But absent that level of natural-language-style programming, there will be a mismatch between thoughts and code, and an LLM reduces that mismatch.
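
A hypothetical illustration of that mismatch (the Point class below is invented for this comment, not taken from any particular framework): the underlying thought is just "a point with x and y", but classic Java makes you hand-write the details that an IDE template, or an LLM, would otherwise generate:

    // The thought: "a Point with x and y".
    // Pre-record Java forces every detail to be spelled out.
    public class Point {
        private final double x;
        private final double y;

        public Point(double x, double y) {
            this.x = x;
            this.y = y;
        }

        public double getX() { return x; }
        public double getY() { return y; }

        @Override
        public boolean equals(Object o) {
            if (this == o) return true;
            if (!(o instanceof Point)) return false;
            Point p = (Point) o;
            return Double.compare(x, p.x) == 0
                && Double.compare(y, p.y) == 0;
        }

        @Override
        public int hashCode() {
            return java.util.Objects.hash(x, y);
        }
    }

    // A more "fluent" construct collapses the boilerplate to roughly
    // the size of the thought itself (Java 16+):
    // record Point(double x, double y) {}

The record one-liner is about as close as the language gets to the "short thought"; everything above it is the kind of chunked-away detail that gets written without thinking.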


So the software lifecycle ends up as a sort of Zeno's paradox: each incremental maintainer understands the system less... fascinating, ty!


I believe pretty much anyone who has observed a few cycles can tell as much.

Often the major trigger for a rewrite is that the knowledge has mostly left the building.

But then there's the cognitive dissonance: we like pretending that the system is the knowledge, and thus has economic value in itself, and that people are interchangeable. Neither of which is true.


On sufficiently large and old code bases, yes, this is exactly the case.


I totally agree.

Reviewing the solution is limited. What you don't get are the myriad other ways that didn't work out.

Elegant solutions are the result of weeding out dozens of other messy ways.

So what gets perpetuated here, then, is the Dunning-Kruger effect.

While it might speed things up in many normal circumstances, it devalues hard work in the long run. Not good.


I've found that it is a force multiplier for my hard work.


The Dunning-Kruger effect, and its many misspellings, might be the most overused term on this website.


The industry is rife with it, and this is a haven for experienced professionals as well as newcomers.

One need only look at open source projects to see the massive variance in code quality that is out there.


Lol, 100%, it's the overeducated word for "dumb"


It is similar to how much a student learns from working hard to solve a problem versus from being given the final solution. The effort to solve it yourself tends to give a deeper understanding and make it easier to remember.

Maybe that's not the case in all fields, but it is in my experience at least, including in software. Code I've written I know on a personal level, while I can much more easily forget code I've only reviewed.



