
> a story

Yeah, that framing for LLMs is one of my pet causes: it's document generation. Some documents resemble stories with characters, and everything else (e.g. "chatting" with an LLM) is an illusion, albeit an impressive and sometimes-useful one.

Being able to generate a document where humans perceive plausible statements from Santa Claus does not mean Santa Claus now lives inside the electronic box, that flying sleighs are real, etc. The principle still holds even if the character is described as "an intelligent AI assistant named [Product Name]".
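To make the "one document" point concrete, here's a minimal Python sketch; `fake_complete` and the transcript format are illustrative stand-ins, not any particular product's API:

    def fake_complete(document: str) -> str:
        # A real model would return a plausible continuation of
        # `document`; this stub just returns a canned line.
        return " The capital of France is Paris."

    # The whole "chat" is a single text document being extended.
    transcript = (
        "A conversation with an intelligent AI assistant named Helpful.\n"
        "User: What is the capital of France?\n"
        "Helpful:"
    )
    transcript += fake_complete(transcript)
    # Appending the next "User:" line and completing again yields the
    # illusion of turn-taking; there is no persistent entity, only a
    # document that keeps getting continued.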



I don't understand your comment. Phrased in the terms of your document view, what I'm trying to say is that even though the models can generate some kinds of documents well (computer programs, answers to questions), they are terrible at generating others, such as stories.


I'm underlining that "it's a story, not a conversation" is indeed the direction we need to think in when discussing these systems, and that a further step along that direction is "it's a document which humans can perceive as a story." That's the level on which we need to engage with the problem: asking what features of a document seem wrong to us, and why it might have been iteratively constructed that way.

In the opposite direction, people (understandably) fall for the illusion, and start operating under the assumption that they are "talking to" some kind of persistent entity which is capable of having goals, beliefs, or personality traits. Voodoo debugging.



