
I've done a lot of work over the last year wrangling LLM outputs - both from the OpenAI API as well as local LLMs.

What are the benefits of using Fructose over LMQL, Guidance or OpenAI's function calling?



Still learning about the landscape, so I can't give informed opinions. LMQL is a new one for me; will check it out.

What we're mostly going for is composability over abstraction. What's the smallest nugget of lift we can do for you, to make it feel natural to implement what you want? In this case it's treating LLM calls as plain functions and leaning on native Python features - docstrings, type annotations - so you can still use the rest of the language, like closures, to do the weird things you need.

This is all handwavy - I've got my wizard language-design hat on - so take it with a grain of salt. We're just trying things out.
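
For a concrete picture of the "calls as functions" idea, here's a from-scratch sketch of the pattern. This is illustrative, not Fructose's actual API, and call_llm is a stand-in for whatever client you'd use:

    import inspect
    from typing import get_type_hints

    def call_llm(prompt: str) -> str:
        return "7.5"  # stand-in for a real model call (OpenAI, local, etc.)

    def llm_function(fn):
        """Decorator sketch: the docstring becomes the prompt, and the
        return annotation says how to parse the model's reply."""
        return_type = get_type_hints(fn).get("return", str)

        def wrapper(*args, **kwargs):
            bound = inspect.signature(fn).bind(*args, **kwargs)
            prompt = (f"{fn.__doc__}\n"
                      f"Inputs: {dict(bound.arguments)}\n"
                      f"Reply with a bare {return_type.__name__}.")
            return return_type(call_llm(prompt))  # naive coercion; real parsing is harder

        return wrapper

    @llm_function
    def rate_sentiment(reviews: list[str]) -> float:
        """Rate the overall sentiment of these reviews from 0 to 10."""

    print(rate_sentiment(["great!", "meh"]))  # -> 7.5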



Here's an awesome post on the landscape: https://hamel.dev/blog/posts/prompt/


I remember reading that, good stuff.

I'd like to see an injectable MITM-like proxy that can rewrite payloads. Many of these frameworks are useful, but when they go off the rails, they're hard to modify and introspect.
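
Something like a mitmproxy addon gets part of the way there. A rough sketch - the host check and the rewrite rule here are placeholders, not anything a framework ships:

    # rewrite_addon.py -- run with: mitmdump -s rewrite_addon.py
    from mitmproxy import http

    def response(flow: http.HTTPFlow) -> None:
        # only touch traffic coming back from the LLM API (placeholder host)
        if "api.openai.com" in flow.request.pretty_host:
            body = flow.response.get_text() or ""
            # example rewrite: strip markdown fences the model wrapped around JSON
            flow.response.set_text(body.replace("```json", "").replace("```", ""))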

It would be nice if LLMs had a way to speak an annotated format, like XML, that could encode higher-level information in a coherent manner, rather than "well formed" ad-hoc text.
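
Say, something in this spirit, where metadata rides along as attributes - the tag and attribute names here are invented for illustration:

    import xml.etree.ElementTree as ET

    # a hypothetical annotated reply: type and confidence carried as attributes
    reply = '<answer type="float" confidence="0.8">7.5</answer>'

    node = ET.fromstring(reply)
    value = float(node.text) if node.get("type") == "float" else node.text
    print(value, node.get("confidence"))  # -> 7.5 0.8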

LLM libraries are in a crazy state right now. It's like JS frameworks in 2015: a new one that demos well every other day.


One idea we're cooking is to offer a proxy with a hosted reformatting model on board, to rewrite payloads on their way back in the case of a type-parse failure. Fructose, the client-side SDK, would be optional.
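
In other words, something like this loop - call_llm and call_reformat_model are stand-ins here, not anything that exists yet:

    import json

    def call_llm(prompt: str) -> str:
        return '{"rating": 7}'          # stand-in for the main model

    def call_reformat_model(text: str) -> str:
        return '{"rating": 7}'          # stand-in for the hosted fixer model

    def call_with_repair(prompt: str, max_repairs: int = 2) -> dict:
        """If the main model's reply won't parse, hand it to a
        reformatting model instead of retrying the whole call blind."""
        raw = call_llm(prompt)
        for _ in range(max_repairs):
            try:
                return json.loads(raw)
            except json.JSONDecodeError:
                raw = call_reformat_model(f"Rewrite this as valid JSON only:\n{raw}")
        return json.loads(raw)          # let the final failure surface to the caller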



