
Kagi is searching the web for you, and then injecting the results into the context of the prompt.
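Roughly, the "retrieve, then stuff the prompt" pattern looks like the sketch below. This is just an illustration; search_web() and llm.complete() are stand-ins I'm assuming, not Kagi's actual code.

    # Minimal sketch of "search first, then inject the results into the prompt".
    # search_web() and llm.complete() are hypothetical stand-ins, not Kagi's code.

    def search_web(query: str, k: int = 5) -> list[dict]:
        """Hit whatever search backend you have; return the top-k hits."""
        ...  # placeholder for a real search API call

    def answer_with_search(llm, question: str) -> str:
        results = search_web(question)
        sources = "\n\n".join(
            f"[{i}] {r['title']} ({r['url']})\n{r['snippet']}"
            for i, r in enumerate(results, 1)
        )
        prompt = (
            "Answer using only the numbered sources below, citing them by number.\n\n"
            f"Sources:\n{sources}\n\nQuestion: {question}"
        )
        return llm.complete(prompt)  # a single model call; the model never searches itself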



Are there any downsides to that approach? It seems like we're moving towards empowering LLMs to interact with things directly, as if that's better than us doing it for them. Is it really?

E.g., say I want to build an agent that makes decisions. Should I write code that inserts the data informing the decision into the prompt, have the model return structured data, and then write code to implement the decision?

Or should I empower the LLM to do those things itself with function calls?
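The first option looks roughly like this (a sketch only; fetch_account(), apply_refund(), and the JSON shape are made up for illustration):

    # Option 1 (sketch): our code does all the I/O, the model only makes the call.
    import json

    def decide_refund(llm, customer_id: str) -> None:
        account = fetch_account(customer_id)             # our code gathers the inputs
        prompt = (
            "Given this account, decide whether to issue a refund. "
            'Reply with JSON like {"refund": true, "amount": 12.50}.\n\n'
            f"Account: {json.dumps(account)}"
        )
        decision = json.loads(llm.complete(prompt))      # model returns structured data
        if decision["refund"]:
            apply_refund(customer_id, decision["amount"])  # our code carries it out

The second option would instead hand the model tool definitions for fetch_account and apply_refund and let it decide when to call them, with our code only executing whatever calls it requests.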


If you want deeper search, the model needs to be able to iterate, plan, and reason while searching.
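In other words, an agent loop around a search tool, roughly like this (a bare-bones sketch; the llm.chat() interface, the tool format, and the stop condition are all assumptions on my part):

    # Bare-bones agent loop around a search tool (sketch, not a real API).

    def deep_search(llm, search_web, question: str, max_steps: int = 5) -> str:
        messages = [{"role": "user", "content": question}]
        for _ in range(max_steps):
            reply = llm.chat(messages, tools=["search"])  # model may plan another search
            if reply.tool_call is None:                   # it decides it has enough
                return reply.content
            results = search_web(reply.tool_call.query)   # run the search it asked for
            messages.append({"role": "assistant", "content": reply.content})
            messages.append({"role": "tool", "content": str(results)})
        return llm.chat(messages).content                 # out of steps: answer with what we have

Each round of results goes back into the context, so the model can refine its next query instead of getting one shot at it.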



