
I think this will be the future. LLMs will know enough to know that they should hand things off to something else.


It's the present. ChatGPT, for example, is an application. It uses models, but it does all kinds of stuff at the application level too.
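
To make that concrete, here is a minimal sketch of application-level orchestration in Python. call_model, run_tool, and the weather tool are hypothetical stand-ins for illustration, not ChatGPT's actual internals: the point is that the app, not the model, executes the handoff.

    # Sketch of an application layer wrapped around a model.
    # Everything here is a hypothetical stand-in, not a real API.

    def call_model(messages):
        # Hypothetical model: hands weather questions off to a tool,
        # answers everything else directly.
        last = messages[-1]
        if last["role"] == "user" and "weather" in last["content"].lower():
            return {"tool": "weather", "args": "Oslo"}
        return {"tool": None, "content": f"answer based on: {last['content']}"}

    def run_tool(name, args):
        # The application, not the model, actually runs the tool.
        tools = {"weather": lambda city: f"12C and raining in {city}"}
        return tools[name](args)

    def chat(user_input):
        messages = [{"role": "user", "content": user_input}]
        while True:
            reply = call_model(messages)
            if reply["tool"] is None:   # plain answer: we're done
                return reply["content"]
            # Model asked for a handoff; run it and loop the result back.
            result = run_tool(reply["tool"], reply["args"])
            messages.append({"role": "tool", "content": result})

    print(chat("What's the weather like?"))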


Are the instructions to pass something off built into the model, or is it clever prompting, or a bit of both?


In general it's "tool use" where the model's system prompt tells it to use certain tools for certain tasks, and having been trained to follow instructions, it does so!
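
For example, with the OpenAI Python SDK (v1.x), tools are declared as JSON schemas and the system prompt tells the model when to use them; the get_weather tool below is a hypothetical example, not a built-in:

    # Minimal tool-use sketch with the OpenAI Python SDK (v1.x).
    # The get_weather tool and its schema are hypothetical.
    import json
    from openai import OpenAI

    client = OpenAI()

    tools = [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }]

    messages = [
        # The system prompt tells the model when to hand off to the tool.
        {"role": "system",
         "content": "Use get_weather whenever the user asks about weather."},
        {"role": "user", "content": "What's the weather in Oslo?"},
    ]

    response = client.chat.completions.create(
        model="gpt-4o", messages=messages, tools=tools
    )

    # The model doesn't run the tool; it emits a structured request
    # (assuming it chose to call one), and the application executes it.
    call = response.choices[0].message.tool_calls[0]
    print(call.function.name, json.loads(call.function.arguments))

The model's only job is to produce the structured tool call; executing it and feeding the result back into the conversation is application-level code, as in the loop sketched earlier.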



