
Yeah, to some degree that's already happened. Anecdotally, I hear that giving your whole iMessage history to Gemini works reasonably well, in terms of the AI understanding who the people in your life are (whether doing so is a good idea overall or not).

I think some degree of curation remains necessary, though: even if context windows are very large, you will get poor results if you spew a bunch of junk into the context. This curation is basically what people mean when they talk about Context Engineering; a rough sketch of what it might look like is below.
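To make that concrete, here's a minimal sketch of that kind of curation, assuming the common pattern of scoring stored snippets against the current query and packing only the most relevant ones into a token budget. The embed() and count_tokens() functions here are hypothetical placeholders (a toy character-frequency vector and a rough length heuristic), not any particular model or API.

    import math

    def embed(text: str) -> list[float]:
        # Placeholder: in practice this would call an embedding model.
        # A trivial character-frequency vector keeps the example runnable.
        vec = [0.0] * 128
        for ch in text.lower():
            vec[ord(ch) % 128] += 1.0
        return vec

    def cosine(a: list[float], b: list[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb) if na and nb else 0.0

    def count_tokens(text: str) -> int:
        # Rough approximation: ~4 characters per token.
        return max(1, len(text) // 4)

    def curate_context(query: str, snippets: list[str], budget: int = 2000) -> str:
        # Rank snippets by relevance to the query, then keep as many of the
        # best ones as fit within the token budget.
        q = embed(query)
        ranked = sorted(snippets, key=lambda s: cosine(q, embed(s)), reverse=True)
        chosen, used = [], 0
        for s in ranked:
            cost = count_tokens(s)
            if used + cost > budget:
                continue
            chosen.append(s)
            used += cost
        return "\n\n".join(chosen)

The point isn't the specific scoring function; it's that something decides what goes into the prompt rather than dumping everything in and hoping the model sorts it out.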

I've got nothing but vibes for evidence, but in the long run I think it will still be worth implementing curation / more deliberate recall. Partly because I think we'll ultimately land on on-device LLMs being the norm, which will have a major speed and privacy advantage. If I can make an application work smoothly with a smaller, on-device model, that's going to be pretty compelling compared to a large-context-window frontier model.

Of course, even in that scenario, maybe we get an on-device model with a big enough context window for none of this to matter!


