
You can point out the errors to people, which will lead to fewer issues over time as they gain experience. The models, however, don't do that.


I think there is a lot of confusion on this topic. Humans as employees have the same basic problem: you have to train them, and at some point they quit, and then all that experience is gone. The difference is that the teaching takes much longer. The retention, relative to the time it takes to teach, is probably not great (admittedly I have not done the math).

A model forgets "quicker" (in human time), but it can also be taught on the spot, simply by pushing the necessary stuff into the ever-increasing context (see Claude Code and its layered CLAUDE.md files for how that works at any level). Gaining experience is simply not necessary, because the model can infer on the spot, given you provide enough context.
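For example, a project-level CLAUDE.md can capture the kind of knowledge a departing employee would otherwise take with them. A minimal sketch (the project details here are made up for illustration):

  # CLAUDE.md (checked into the repo root; contents are hypothetical)
  - Run make test before pushing; CI runs the same target.
  - The payments module still talks to the legacy SOAP endpoint; don't "modernize" it without asking.
  - Migrations live in db/migrations and must be reversible.

As I understand it, Claude Code reads these files at the start of a session (a global one under ~/.claude/, one at the repo root, and optionally per subdirectory), so the "training" survives any number of context resets.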

In both cases, having good information/context is key. But the difference, of course, is that an AI is engineered to be a competent and helpful worker and will be consistently willing to ingest all of that, whereas a human will be a human, bring their individual human stuff, and will not be very keen to tell you about all of their insecurities.


But the person doing the job changes every month or two.

There's no persistent experience being built, and each newcomer to the job screws it up in their own unique way.


The models do do that, just at the next iteration of the model. And everyone gains from everyone's mistakes.



