
Good points.

Perhaps building the representation is building understanding. But humans did that for Sora and for all the other architectures too (if you'll allow a little meta-building).

But evaluation alone is not understanding. Evaluation is merely following a rote sequence of operations, just like the physics engine or the Chinese room.

People recognize this distinction all the time: kids in elementary school memorize the mathematical steps but do not yet know which steps to apply to a particular problem. Such a kid does not yet understand, because the kid is guessing. Sora just happens to guess with an incredibly complicated set of steps.

(I guess.)



I think this is a good insight. But if the kid gets sufficiently good at guessing, does it matter anymore?

I mean, at this point the question is so vague that maybe it's kind of silly. But I do think there's some point of "good at guessing" that makes an LLM just as valuable as a human for most things, honestly.


Agreed.

For low-stakes interpolation, give me the guesser.

For high-stakes interpolation or any extrapolation, I want someone who does not guess (any more than is inherent to extrapolating).



