
Sorry, maybe I should've been clearer that it was a sarcastic remark. The whole point of doing vector DB search is to feed the LLM very targeted context so you can save $ on API calls to the LLM.
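Roughly the idea, as a minimal sketch: rank chunks by similarity to the query and send only the best match to the model instead of the whole corpus. The bag-of-words similarity here is just a stand-in for a real embedding model and vector DB, and the documents and query are made up:

    import math
    import re
    from collections import Counter

    def embed(text: str) -> Counter:
        # Stand-in for a real embedding model: bag-of-words term counts.
        return Counter(re.findall(r"[a-z]+", text.lower()))

    def cosine(a: Counter, b: Counter) -> float:
        dot = sum(a[t] * b[t] for t in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    docs = [
        "Invoices are due within 30 days of receipt.",
        "Refunds are processed through the billing portal.",
        "The office is closed on public holidays.",
    ]

    query = "When do invoices have to be paid?"
    q_vec = embed(query)

    # Rank documents by similarity and keep only the top hit, so the prompt
    # carries one short passage instead of the entire corpus.
    ranked = sorted(docs, key=lambda d: cosine(q_vec, embed(d)), reverse=True)
    context = ranked[0]

    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    print(prompt)  # this is what gets sent to the LLM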


No worries. I should probably make sure I have at least a token understanding of the topic (cloud-based architecture) before commenting next time, haha.


That’s not the whole point; it’s the intersection of reducing the tokens sent and getting search that is both specific and generic enough to capture the correct context data.


It's possible to create linking documents between the existing documents to help smooth things out in some cases.
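For example (continuing the toy sketch above, with made-up documents): a query that spans two topics may match each original document only weakly, but a short linking document that bridges them can surface the right context.

    # Hypothetical linking document bridging the invoice and refund docs.
    docs.append(
        "Refunds for overpaid invoices are issued as credits on the next invoice."
    )

    query = "Can I get a refund on an invoice I overpaid?"
    q_vec = embed(query)
    best = max(docs, key=lambda d: cosine(q_vec, embed(d)))
    print(best)  # the linking document is retrieved as the context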



