
> it's not looking back at what was just generated

It is, though. The LLM gets the full conversation history in every prompt until you start a new session. That's also why responses get slower as the conversation/context grows.

The developer could choose to rewrite or edit the history before sending it back to the LLM, but the user typically can't. A rough sketch of the mechanics is below.
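To make that concrete, here's a minimal sketch of a typical chat loop. `call_llm` is a hypothetical stand-in for any chat-completion API, not a specific SDK; the point is only that the client resends the entire message list on every turn:

    def call_llm(messages: list[dict]) -> str:
        """Hypothetical API call: takes the full message list, returns a reply."""
        raise NotImplementedError  # wire up to a real provider here

    history = [{"role": "system", "content": "You are a helpful assistant."}]

    def send(user_input: str) -> str:
        history.append({"role": "user", "content": user_input})
        # The whole history goes out with every request, so the model
        # does "look back" at everything, including its own earlier output.
        reply = call_llm(history)
        history.append({"role": "assistant", "content": reply})
        return reply

    # The developer controls `history` and could edit or truncate it
    # before the next call; a user of the finished app typically cannot.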

> There's no guarantee everything stays the same except the mistake

Sure, but there's no guarantee about anything it generates; that's a separate issue.


