
This idea was championed by the Stochastic Parrots paper, which assumed LLMs are just pattern learners; that assumption doesn't make sense:

- the recently discovered simple-to-hard RL training method is one argument against it: models trained on simpler problems can reason their way through progressively harder ones

- zero-shot translation shows the models really develop an interlingua, a shared semantic representation; otherwise they wouldn't be able to translate between unseen pairs of languages (see the sketch after this list)

- the human in the room is also important. LLMs are like pianos, we play prompts on the keyboard to them, and they play back language to us. The quality and originality of the output aren't solely inherent to the model, but are co-created in the dialogue with the prompter. It's not just about the instrument, but also the 'musician' playing it.
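
On the zero-shot point: the result presumably being referenced is Google's multilingual NMT work (Johnson et al., 2017), where one model trained on English <-> Japanese and English <-> Korean could translate Japanese <-> Korean directly. As a minimal sketch of the mechanics, here is how a single multilingual model is steered between language pairs with a target-language tag, using Hugging Face's M2M100 (a caveat: M2M100 itself was trained on many directions, so this only illustrates the tag-steering mechanism, not the zero-shot result itself):

    # Minimal sketch: one multilingual model, steered by language tags.
    # Assumes the transformers library is installed and weights can download.
    from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

    model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
    tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")

    tokenizer.src_lang = "ja"  # source language: Japanese
    encoded = tokenizer("猫はピアノが好きです。", return_tensors="pt")
    generated = model.generate(
        **encoded,
        forced_bos_token_id=tokenizer.get_lang_id("ko"),  # target: Korean
    )
    print(tokenizer.batch_decode(generated, skip_special_tokens=True))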



> - the human in the room is also important. LLMs are like pianos, we play prompts on the keyboard to them, and they play back language to us. The quality and originality of the output aren't solely inherent to the model, but are co-created in the dialogue with the prompter. It's not just about the instrument, but also the 'musician' playing it.

So you agree they aren't reasoning? Otherwise why would you need a human to do skillful prompting? Why can't the AI just solve it on its own?


> So you agree they aren't reasoning? Otherwise why would you need a human to do skillful prompting? Why can't the AI just solve it on its own?

That's the argument kings used to justify looking down on peasants.

AI, as it is currently, has no motivation of its own; it is just as "happy" to solve a business problem as it is to write Chinese poetry about Swiss software engineers taking their dogs on a ride up the Adliswil-Felsenegg cable car.

Good communication skills are still important for getting what you want; otherwise we merely descend into a particular SMBC comic: https://www.smbc-comics.com/index.php?id=3576

(This isn't to say the AIs we have now are "people"; nobody knows how to even test for that yet. I hope we figure this question out some time soon, preferably before we need to know the answer…)


> That's the argument kings used to justify looking down on peasants.

Peasants do become kings, though. But we still need a human prompter for these AI models instead of just connecting them to a bug tracker.

> AI, as it is currently, has no motivation of its own; it is just as "happy" to solve a business problem as it is to write Chinese poetry about Swiss software engineers taking their dogs on a ride up the Adliswil-Felsenegg cable car.

You don't need any more motivation than any other worker: you do the tasks assigned to you. But currently you need a prompter middleman.

> Good communication skills are still important for getting what you want; otherwise we merely descend into a particular SMBC comic: https://www.smbc-comics.com/index.php?id=3576

But currently you communicate that to a human who then translates your needs to an LLM. Why do we still need that middleman? There are so many well-described problems and projects out there already; why can't you connect an LLM to them and have it complete those projects on its own?


> Peasants do become kings, though. But we still need a human prompter for these AI models instead of just connecting them to a bug tracker.

What I said is sufficient to show why the parent comment wasn't necessarily agreeing that "they aren't reasoning".

Likewise:

> You don't need any more motivation than any other worker: you do the tasks assigned to you. But currently you need a prompter middleman.

In response to "Otherwise why would you need a human to do skillful prompting":

The tasks being assigned to me are prompts. They're prompts written in the form of a JIRA ticket, but they're prompts.

If you point me at a codebase and say "make it better", I'll get right on that vague statement: add unit tests, refactor stuff, make sure there are suitable levels of abstraction… but since when are those business goals?

Point me at a codebase without giving me any direction at all? I'll use the app (or whatever) for a bit, see what feels like a bug, try to fix that, then twiddle my thumbs.

> But currently you communicate that to a human who then translates your needs to an LLM. Why do we still need that middleman?

As I'm a software developer, surely I'd be the middleman here?

> There are so many well-described problems and projects out there already; why can't you connect an LLM to them and have it complete those projects on its own?

Quality. The better ones I've tried (most recently o1) seem slightly better than a recent graduate, but not at senior level. The code works, but only most of the time, and like a junior, it will need help.


> Otherwise why would you need a human to do skillful prompting? Why can't the AI just solve it on its own?

Solve what? It needs to understand what you want; that's what skillful prompting is for.


> zero-shot translation shows the models really develop an interlingua, a shared semantic representation; otherwise they wouldn't be able to translate between unseen pairs of languages

In the zero-shot translation experiment, the trained directions were:

English -> Japanese and Japanese -> English

English -> Korean and Korean -> English

And the zero-shot directions were:

Japanese -> Korean and Korean -> Japanese

What this shows is a transitive property: the jump to Japanese -> Korean (and the reverse) could be accomplished without an independent semantic representation, purely by chaining the trained directions. It would only require some earlier layers in the network to encode part of Japanese -> English and some later layers to encode part of English -> Korean, with the equivalent chain for Korean -> Japanese.

So I'm not sure this proves the development of a semantic representation independent of the direct translation task. At the least, an independent semantic representation is a much stronger claim.
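
To make the composition argument concrete, here's a toy sketch (the word tables are hypothetical stand-ins for what earlier and later layers might encode): zero-shot Japanese -> Korean falls out of plain function composition through an English pivot, with no shared semantic space involved.

    # Toy sketch of the "transitive" alternative: chain the two trained
    # directions through an English pivot. The word tables are hypothetical.
    JA_TO_EN = {"猫": "cat", "水": "water"}
    EN_TO_KO = {"cat": "고양이", "water": "물"}

    def translate_ja_ko(word: str) -> str:
        # The only "meaning" passed between the two mappings is an English
        # token, so no language-independent representation is required.
        return EN_TO_KO[JA_TO_EN[word]]

    assert translate_ja_ko("猫") == "고양이"
    assert translate_ja_ko("水") == "물"

A real network could, of course, blend both strategies, which is why separating them would take probing of the internal representations rather than zero-shot success alone.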


I think what all of this shows is that language itself is the key invention for making something reason.

The next step is to get it working with math symbols (I'd have thought they'd already done that).

Then it's to combine all the ways we 'cheat' at reasoning (street signs, crochet, secret handshakes, whatever) and see how the methods work in combination.

Then, I'd think, we have these AIs come up with new ways to reason that we don't have yet. Once you get the 'theory' down with a lot of examples, it should be able to hallucinate new ones.

Then, well, buckle up I guess


I don't think language is the key, but it certainly is A key, and we can go as far as AGI with it.


> - the human in the room is also important. LLMs are like pianos, we play prompts on the keyboard to them, and they play back language to us. The quality and originality of the output aren't solely inherent to the model, but are co-created in the dialogue with the prompter. It's not just about the instrument, but also the 'musician' playing it.

I mean, isn't that what humans are as well? You can get two LLMs talking to each other, and suddenly the musician in the room doesn't matter.



