Still learning about the landscape so can't give informed opinions. LMQL is a new one for me, will check it out.
What we're mostly going for is composability over abstraction. What's the smallest nugget of lift we can do for you to make it feel natural to implement what you want? In this case it's treating the calls as functions and leaning on native Python features (functions, docstrings, types), so you can still use the language itself, closures and all, to do the weird things you need.
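To make that concrete, here's a rough sketch of the idea, not Fructose's actual API: a decorator that turns a typed, docstring'd Python function into an LLM call, building the prompt from the signature and parsing the reply back into the annotated return type. The model call is stubbed out (`fake_llm` is a stand-in, and all the names here are illustrative).

```python
import inspect
import json
from typing import Callable, get_type_hints


def fake_llm(prompt: str) -> str:
    # Stand-in for a real model call; a real backend would answer the prompt.
    return json.dumps(["pizza", "tacos"])


def ai(fn: Callable) -> Callable:
    """Treat a typed, documented Python function as an LLM call."""
    hints = get_type_hints(fn)
    return_type = hints.pop("return", str)

    def wrapper(*args, **kwargs):
        bound = inspect.signature(fn).bind(*args, **kwargs)
        # The docstring is the task, the arguments are the data,
        # and the return annotation tells the model what shape to reply in.
        prompt = (
            f"Task: {inspect.getdoc(fn)}\n"
            f"Arguments: {dict(bound.arguments)}\n"
            f"Reply with JSON parseable as {return_type}."
        )
        return json.loads(fake_llm(prompt))

    return wrapper


@ai
def favorite_foods(person: str) -> list[str]:
    """List a few foods the given person might like."""


print(favorite_foods("Ada"))  # a real Python list, not a raw string
```

Because the decorated thing is still a plain Python function, everything that works on functions (closures, partials, higher-order composition) keeps working on it.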
This is all handwavy, said with my wizard language-design hat on, so take it with a grain of salt. We're just trying things out.
I'd like to see an injectable MITM-style proxy that can rewrite payloads. Many of these frameworks are useful, but when they go off the rails, they're hard to modify and introspect.
It would be nice if LLMs had a way to speak an annotated format, like XML, that could encode higher-level information coherently, rather than "well formed" ad hoc text.
LLM libraries are in a crazy state right now. It's like JS frameworks in 2015: a new one that demos well every other day.
One idea we're cooking is to offer a proxy with a hosted reformatting model on board, to rewrite payloads on their way back in the case of a type parse failure. Fructose, the client-side SDK, would be optional.
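The control flow there is simple to sketch. This is an assumption about how such a proxy might work, with both models stubbed (the `reformatter_model` here just slices out the JSON; a real one would be a second LLM prompted to repair the payload):

```python
import json


def raw_model(prompt: str) -> str:
    # Chatty model output that fails strict JSON parsing.
    return "Sure! Here you go: [1, 2, 3]"


def reformatter_model(bad_output: str, schema: str) -> str:
    # Stand-in for a hosted reformatting model: given the failed payload
    # and the target schema, return something that parses. Here we just
    # crudely extract the bracketed span.
    start, end = bad_output.find("["), bad_output.rfind("]") + 1
    return bad_output[start:end]


def call_with_repair(prompt: str, schema: str = "JSON list") -> list:
    raw = raw_model(prompt)
    try:
        # Happy path: the payload already matches the expected type.
        return json.loads(raw)
    except json.JSONDecodeError:
        # Parse failure: let the reformatter rewrite the payload
        # on its way back to the client.
        return json.loads(reformatter_model(raw, schema))


print(call_with_repair("give me three numbers"))  # → [1, 2, 3]
```

The nice property is that the repair loop lives server-side in the proxy, so any client speaks typed responses without needing the SDK at all.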
What are the benefits of using Fructose over LMQL, Guidance or OpenAI's function calling?