
Predicting the set of words that constitutes a helpful response when given a friendly question is still valid in the world of stochastic parrots.

Reducing its actions to "just predicting the next word" does a disservice to what it's actually doing, and only proves you can operate at the wrong abstraction level. It's like saying "human beings are just a bunch of molecular chemistry, and that's it" or "computers and the internet are just a bunch of transistors doing boolean logic" (Peterson calls this "abstracting to meaninglessness"). While technically true, it does a disservice to all of the emergent complex behaviour happening way up the abstraction stack.

ChatGPT is not just parroting the next words from its training data; it is capable of producing novel output by doing abstraction laddering AND abstraction manipulation. The fact that it produces novel output this way is evidence of some degree of compositional thinking. Again, this doesn't eliminate the stochastic parrot, only-predicting-the-next-word explanation, but the key is in the terminology: it's a STOCHASTIC parrot, not an overfit neural network that cannot generalize beyond its training data (as shown by the generation of compositional novel output).
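
To make the "stochastic" part concrete, here's a minimal sketch of temperature sampling over a toy vocabulary with made-up logits (the numbers and vocabulary are invented for illustration, nothing here comes from any real model): the next token is drawn from a temperature-scaled softmax rather than always taking the argmax, which is why the same prompt can yield different continuations.

    import math
    import random

    def sample_next_token(logits, temperature=1.0):
        # Softmax with temperature; the randomness of the draw is the
        # "stochastic" in "stochastic parrot".
        scaled = [l / temperature for l in logits]
        m = max(scaled)  # subtract max for numerical stability
        exps = [math.exp(s - m) for s in scaled]
        total = sum(exps)
        probs = [e / total for e in exps]
        r = random.random()
        cum = 0.0
        for i, p in enumerate(probs):
            cum += p
            if r < cum:
                return i
        return len(probs) - 1

    # Hypothetical toy vocabulary and logits, for illustration only:
    vocab = ["the", "cat", "sat", "mat"]
    logits = [2.0, 1.0, 0.5, 0.1]

    greedy = logits.index(max(logits))              # argmax: always "the"
    sampled = sample_next_token(logits, 0.8)        # varies run to run
    print(vocab[greedy], vocab[sampled])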

Yes, it is only predicting the next word, and you are only a bunch of molecules; picking the wrong abstraction level renders any description meaningless.



All true, but those models are not thinking, and a slightly different prompt leads to dramatically different result quality.

It is true that those models can produce amazing results, but they try to give the most realistic answer, not the most correct or helpful one.

Because of fine-tuning we very often get correct answers, and sometimes we might forget that that isn't really what the model is trying to do (see the sketch after this comment).

To give a real-life analogy: you might think some consultant is really trying to help you, when he's actually just someone trying to earn a living, and helping you is merely how he achieves that. In most cases the result might be the same, but if someone e.g. bribes him, the results might be surprising.
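
A minimal sketch of that point, assuming the standard next-token pre-training objective and using random tensors as stand-ins for real model outputs (shapes and sizes are invented for illustration): the loss only rewards making the observed next token likely; nothing in it mentions "correct" or "helpful".

    import torch
    import torch.nn.functional as F

    # Hypothetical toy dimensions, for illustration only.
    vocab_size = 100
    batch, seq = 2, 8

    logits = torch.randn(batch, seq, vocab_size)          # stand-in for model output
    targets = torch.randint(0, vocab_size, (batch, seq))  # the actual next tokens

    # Standard next-token cross-entropy: maximize the log-likelihood of the
    # training text. Correctness or helpfulness never appears in this objective;
    # fine-tuning adds that pressure on top, later.
    loss = F.cross_entropy(logits.reshape(-1, vocab_size), targets.reshape(-1))
    print(loss.item())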



