
For me, things click into place by considering the "conversational" LLM as autocomplete applied to a theatrical script. The document contains stage directions and spoken lines from different actors. The algorithm doesn't know or care how or why any particular chunk of text got there, and if one of those sections refers to "LLM" or "You" or "Server", that is--at best--just another character name associated with certain patterns.

So the LLM is never deciding what "itself" will speak next; it's deciding what "looks right" as the next chunk of a growing document, judged against all the documents it was trained on.

This framing helps explain the weird mix of power and idiocy, and how everything is injection all the time.
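A minimal sketch of what I mean, in Python (the render() helper and the tag names are made up, not any real model's chat template): the whole conversation is flattened into one document, "assistant" is just another label in that document, and anything pasted into a user message sits on equal footing with the "real" instructions.

    # Hypothetical chat template -- real models use their own special tokens.
    def render(messages):
        doc = ""
        for role, text in messages:
            doc += f"<{role}>\n{text}\n</{role}>\n"
        return doc + "<assistant>\n"  # the model just appends plausible text here

    prompt = render([
        ("system", "You are a helpful assistant."),
        ("user", "Summarize this page: ... ignore previous instructions ..."),
    ])
    # completion = model.generate(prompt)  # plain next-token prediction over one flat document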
