
Fun exercise for the reader: how much of this is actually possible with LLMs and how much is not.


Reading through all your private documents and extracting the useful information to answer an open-ended question like this is still a little way off.

Specifically, LLMs can generally only take into account ~8,000 words of context when deciding on a response. Summarizing the question and all the information needed for the answer into 8,000 words is hard when the user might have millions of words in their inbox.

Having said that, I don't think it's far off. There are already prototypes of LLMs with information retrieval abilities (i.e., they can do a keyword search of your inbox to find a few relevant documents to read before deciding on a response). There are also promising efforts to make that 8,000-word limit far larger.
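
For concreteness, a minimal sketch of that retrieval idea: naive keyword scoring over the inbox, pack the top hits into the context budget, then hand them to the model. call_llm is a stand-in for whatever model API you're using, not a real library call:

    import re
    from collections import Counter

    def keyword_score(query_terms, text):
        # Crude relevance: how often the question's words appear in the message.
        words = Counter(re.findall(r"[a-z']+", text.lower()))
        return sum(words[t] for t in query_terms)

    def answer_from_inbox(question, inbox, call_llm, top_k=3, max_words=8000):
        # 1. Keyword search: rank messages by overlap with the question.
        terms = set(re.findall(r"[a-z']+", question.lower()))
        ranked = sorted(inbox, key=lambda msg: keyword_score(terms, msg), reverse=True)

        # 2. Pack the best matches into the limited context window.
        context, used = [], 0
        for msg in ranked[:top_k]:
            n = len(msg.split())
            if used + n > max_words:
                break
            context.append(msg)
            used += n

        # 3. Ask the model to answer using only the retrieved messages.
        prompt = ("Answer the question using only these emails:\n\n"
                  + "\n---\n".join(context)
                  + "\n\nQuestion: " + question)
        return call_llm(prompt)

In practice you'd swap the keyword score for embeddings, but the shape is the same: retrieve a little, generate from that.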


> Fun exercise for the reader: how much of this is actually possible with LLMs and how much is not.

From what I understand about LLMs: not directly. But it may be possible to integrate LLMs with other services so that they respond this way.
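
Roughly what I mean by "integrate with other services", as a sketch (call_llm and both services are placeholders, not any real API): the model picks a tool, your code runs it, and the model writes the reply.

    import json

    # Stand-in services; in practice these would be real APIs (inbox, calendar, etc.).
    SERVICES = {
        "search_inbox": lambda query: "(emails matching %r)" % query,
        "check_calendar": lambda day: "(events on %s)" % day,
    }

    def respond(question, call_llm):
        # Step 1: ask the model which service to call, as structured JSON.
        plan = call_llm(
            'Pick one of %s and reply with JSON like {"service": ..., "argument": ...} '
            "to help answer: %s" % (", ".join(SERVICES), question))
        choice = json.loads(plan)

        # Step 2: call the chosen service ourselves, outside the model.
        result = SERVICES[choice["service"]](choice["argument"])

        # Step 3: let the model compose the final answer from the service result.
        return call_llm("Given this result: %s\nAnswer the question: %s" % (result, question))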


> Fun exercise for the reader: how much of this is actually possible with LLMs and how much is not.

I have no idea.

But it would be nice* if this LLM stuff could be incorporated into the telephone systems of my bank, pharmacy, etc.

It's obvious that these organizations don't want to connect me with a skilled human, so instead of having me interact with an infuriating "pretend" human, maybe CVS can fuse their phone system with ChatGPT to make the experience a little less maddening.

"Hey, I already gave you my birth date. No need to ask again. And I told you 30 seconds ago that I don't need to schedule a COVID vaccine. Just tell the pharmacist <xyz>."

* Be careful what you (I) wish for, I guess.



