Using RAG, a smaller local LLM combined with local data (e.g. your emails, iMessages, etc.) can be more useful than a large external LLM that doesn't have access to your data.
No point asking GPT4 “what time does John’s party start?”, but a local LLM can do better.
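A minimal sketch of that RAG idea: retrieve the most relevant local snippet and prepend it to the prompt for the local model. The messages and query below are hypothetical, and the toy bag-of-words similarity stands in for a real embedding model.

```python
# Toy RAG sketch: find the local message most relevant to the query and
# stuff it into the prompt as context. A real setup would use an embedding
# model and an actual local LLM; the data here is made up for illustration.
from collections import Counter
import math

messages = [
    "Dentist appointment moved to Tuesday at 3pm.",
    "John's party starts at 7pm on Saturday, bring snacks!",
    "Invoice #442 is due at the end of the month.",
]

def similarity(a: str, b: str) -> float:
    """Cosine similarity over word counts -- a stand-in for embeddings."""
    wa, wb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(wa[w] * wb[w] for w in wa)
    norm = (math.sqrt(sum(v * v for v in wa.values()))
            * math.sqrt(sum(v * v for v in wb.values())))
    return dot / norm if norm else 0.0

def build_prompt(query: str) -> str:
    """Retrieve the best-matching snippet and build the augmented prompt."""
    context = max(messages, key=lambda m: similarity(query, m))
    return f"Context: {context}\nQuestion: {query}\nAnswer:"

print(build_prompt("What time does John's party start?"))
```

The retrieval step is what lets a small model answer questions its weights could never contain: the answer travels in the prompt, not the parameters.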
This is why I think Apple's implementation of LLMs is going to be a big deal, even if it's not technically as capable. Just making Siri better able to converse (e.g. ask clarifying questions) and giving it the context offered by user data will make it dramatically more useful than siloed-off remote LLMs.