
Not the same logic because interns can make meaning out of the data - that’s built-in error correction.

They also remember what they did - if you spot one misunderstanding, there’s a chance they’ll be able to check all similar scenarios.

Comparing the mechanics of an LLM to human intelligence shows a deep misunderstanding of one, the other, or both - assuming it's done in good faith, of course.



Not sure why you're trying to drag intellectual capability into this and complicate the argument? The problem layout is the same. You delegate the work to someone, which means you can't understand all the details yourself. That creates a fundamental tension between trust and confidence. The parameters might differ depending on intellectual capability, but whoever you delegate to, you cannot evade this trade-off.

BTW, not sure if you have experience delegating work to human interns or new grads and being rewarded with disastrous results? I've done that multiple times and don't trust anyone too much. This is why we typically develop review processes, guardrails, etc.


> not sure if you have experience delegating work to human interns or new grads and being rewarded with disastrous results?

Oh yes I have ;)

Which is why I always explain the why behind the task.



