This is great! AIUI, llama.cpp does support tools, but I haven't yet figured out how to make llm use them. Is there anything I can put into extra-openai-models.yaml to make this work?
Simon, at this point you have a lot of LLM-related tools, and I am not sure which one is outdated, which is the newest and fanciest, which one should be used (and when), and so forth.
Is there a blog post / article that addresses this?
Great, this works here. I wonder: with extra-openai-models.yaml I was able to set the api_base and vision/audio: true. How do I do this with the llama-server-tools plugin? Vision works, but llm refuses to attach audio because it thinks the model does not support audio (which it does).
EDIT: I think I just found what I want. There is no need for the plugin; extra-openai-models.yaml just needs "supports_tools: true" and "can_stream: false".
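For anyone landing here later, here is a minimal sketch of the kind of entry I mean. The model_id, model_name, and api_base values are placeholders for whatever your local llama-server actually exposes, and the file lives in llm's user config directory (location varies by platform):

```yaml
# extra-openai-models.yaml -- sketch, values are placeholders
- model_id: llama-server          # name you will pass to `llm -m ...`
  model_name: local-model         # placeholder; the model name the server expects
  api_base: "http://localhost:8080/v1"   # assumed local llama-server endpoint
  vision: true                    # allow image attachments
  audio: true                     # allow audio attachments
  supports_tools: true            # enable tool calling
  can_stream: false               # disable streaming, as noted above
```

With that in place, tool calls and attachments go straight through the OpenAI-compatible endpoint, no extra plugin needed.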