That is a meaningless definition of prediction if "what is a good next word" has an ever-changing definition in humans (as everything would fulfill that definition).


That's the very definition of prediction in an LLM.

What does "has an ever changing definition" mean?

And why "everything would fulfill that definition"?

At any given time, what the "good next word" is depends on the state created by our inputs thus far (including chemical/physiological state, like decaying memories, and so on). And far from "everything" fulfilling it, it can be only a single specific word.

(Same as with an LLM: if we include the random seed among its inputs, we get the same results given the same training and the same prompt.)
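As a minimal sketch of that point (not any particular LLM's API; just toy logits and Python's standard random module), fixing the seed along with the "prompt" makes the sampled next word fully reproducible:

    import math, random

    def sample_next_word(vocab, logits, seed):
        # Softmax over the toy logits, then sample with a seeded RNG.
        exps = [math.exp(x) for x in logits]
        probs = [e / sum(exps) for e in exps]
        rng = random.Random(seed)  # the seed is part of the "input"
        return rng.choices(vocab, weights=probs, k=1)[0]

    vocab = ["cat", "dog", "fish"]   # hypothetical vocabulary
    logits = [2.0, 1.0, 0.5]         # hypothetical scores for a fixed prompt
    print(sample_next_word(vocab, logits, seed=42))
    print(sample_next_word(vocab, logits, seed=42))  # identical output: same prompt, same seed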


"it can be only a single specific word" - that is incorrect as a human can change the process to generate the next word, up to and including, using a random process to create or select the next word (i.e., any word would be fine).

You could say the process chosen is somehow predetermined (even if the choices are then all made using randomness), but then the word "prediction" has very little meaning, as the criteria for what counts as a "good next word" have a nearly unlimited and ever-changing range as the generating process changes.


>"it can be only a single specific word" - that is incorrect as a human can change the process to generate the next word, up to and including, using a random process to create or select the next word (i.e., any word would be fine).

That's also exactly what an LLM does.

It's still only a single specific word if (as I wrote above) you take the seed into account too (i.e., use the same input, including the same random seed value).

If you mean to answer "yes, but LLMs use a random number generator, whereas humans can actually pick a word at random", I'd answer that this is highly contested. Where would the source for such randomness be in the universe (except if you beg the question and attribute it to a "soul" that is outside the universe)?


Claiming that the universe has no randomness is a very strong claim and moves beyond our ("standard") understanding of quantum mechanics. For example, a human could be using radioactive decay to sample randomness (and such devices are available).

An LLM is bound to what an LLM can do, while humans can construct and use tools to go beyond what humans alone can do. Being a universal function approximator does not give access to all processes in the natural world.
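For illustration only (assuming a Linux machine that exposes a hardware entropy device at /dev/hwrng; a radioactive-decay-based generator would similarly appear as a device file), a word picker that defers to external physical randomness rather than a software PRNG might look like:

    def pick_word_from_hardware(vocab, device="/dev/hwrng"):
        # Read a few bytes of entropy from a physical source (hypothetical path;
        # requires such a device and read permission), then index into the
        # vocabulary (ignoring modulo bias for brevity).
        with open(device, "rb") as f:
            raw = f.read(4)
        n = int.from_bytes(raw, "big")
        return vocab[n % len(vocab)]

    # print(pick_word_from_hardware(["cat", "dog", "fish"]))  # needs the device to exist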

