
Gemini is my favorite, but it does seem to be prone to “breaking” the flow of the conversation.

Sharing “system stuff” in its responses, responding to “system stuff”, presenting thoughts as responses and responses as thoughts, ignoring or forgetting things that were just said (as if they’re suddenly invisible), bizarre formatting, switching languages for no reason, saying it will do something (like calling a tool) instead of actually doing it, getting into odd loops, etc.

I’m guessing it all has something to do with the textual representation of chat state and maybe it isn’t properly tuned to follow it. So it kinda breaks the mould but not in a good way, and there’s nothing downstream trying to correct it. I find myself having to regenerate responses pretty often just because Gemini didn’t want to play assistant anymore.

It seems like the flash models don’t suffer from this as much, but the pro models definitely do. The smarter the model, the more it happens.

I call it “thinking itself to death”.

It’s gotten to a point where I often prefer fast and dumb models that will give me something very quickly, and I’ll just run it a few times to filter out bad answers, instead of using the slow and smart models that will often spend 10 minutes only to eventually get stuck beyond the fourth wall.
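The “run it a few times and filter out bad answers” approach is basically best-of-n sampling with a cheap validity check. A minimal sketch, assuming a hypothetical `generate` stub in place of a real model API and a hypothetical `looks_valid` filter for the kinds of breakage described above:

```python
import random

def generate(prompt, seed):
    # Stand-in for a fast model call; real use would hit an LLM API.
    random.seed(seed)
    # Simulate the occasional "broken" response, e.g. the model
    # narrating a tool call instead of performing it.
    if random.random() < 0.3:
        return "I will now call the tool..."
    return f"Answer to: {prompt}"

def looks_valid(response):
    # Cheap downstream filter: reject responses that break the
    # assistant role (here, narrated tool calls).
    return not response.startswith("I will now call")

def best_of_n(prompt, n=5):
    # Sample up to n fast completions and keep the first one
    # that passes the filter; None if all n are broken.
    for seed in range(n):
        candidate = generate(prompt, seed)
        if looks_valid(candidate):
            return candidate
    return None

print(best_of_n("What is 2+2?"))
```

The trade-off the comment describes: n cheap samples plus a dumb filter can beat one slow “smart” sample when the failure mode is obvious enough to detect mechanically.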





> ignoring or forgetting things that were just said (like it’s suddenly invisible)

This sounds like an artifact of the Gemini consumer app, and some of the others may be too (the model providers are doing themselves a disservice by giving the app and the model the same name).



