
It isn’t for every application, but I’ve used it for tasks like extraction, summarization and generating commands where you have specific constraints you’re trying to meet.

Most important to me is that I can write evaluations based on feedback from the team, build them into the pipeline, and track them with LLM-as-a-judge (and other) metrics. With some of the optimizers, you can use stronger models to help propose and test new instructions for your student model to follow, as well as optimize the few-shot examples to use in the prompt (the MIPROv2 optimizer).
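The optimizer loop described above can be sketched roughly as follows. This is an illustrative stand-in, not DSPy's actual API: `propose_instructions`, `judge`, and `student` are hypothetical stubs for the teacher model, the LLM-as-a-judge metric, and the student model.

```python
# Illustrative sketch of a propose-and-test instruction optimizer.
# None of these functions are DSPy's API; they stand in for the
# teacher model, judge metric, and student model respectively.

def propose_instructions(base: str, n: int) -> list[str]:
    # A stronger "teacher" model would generate these candidates;
    # here we just produce labeled variants of the base instruction.
    return [f"{base} (variant {i})" for i in range(n)]

def judge(output: str, expected: str) -> float:
    # LLM-as-a-judge stand-in: exact match scores 1.0, else 0.0.
    return 1.0 if output.strip() == expected.strip() else 0.0

def student(instruction: str, example_input: str) -> str:
    # Stand-in for the student model's response under an instruction.
    return example_input.upper() if "variant" in instruction else example_input

def optimize(base: str, devset: list[tuple[str, str]], n_candidates: int = 4) -> str:
    # Score each candidate instruction on the dev set; keep the best.
    best, best_score = base, -1.0
    for cand in propose_instructions(base, n_candidates):
        score = sum(judge(student(cand, x), y) for x, y in devset) / len(devset)
        if score > best_score:
            best, best_score = cand, score
    return best
```

The real optimizers also search over which few-shot examples to include, but the shape is the same: propose candidates, score them against a metric, keep what wins.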

It’s not that a lot of this can’t be done other ways, but as a framework it provides a non-trivial amount of value when I’m trying to keep track of requirements that grow over time, instead of playing the whack-a-mole game in the prompt.
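The "requirements that grow over time" idea can be sketched as an eval suite where each new team requirement becomes another metric function, rather than another sentence in the prompt. The function names here are hypothetical, not DSPy's API:

```python
# Illustrative sketch: accumulated requirements captured as metric
# functions instead of ever-growing prompt text.

def under_max_words(output: str, max_words: int = 20) -> bool:
    # Requirement: keep outputs short.
    return len(output.split()) <= max_words

def no_hedging_terms(output: str, forbidden=("maybe", "perhaps")) -> bool:
    # Requirement: no hedging language in generated commands.
    return not any(t in output.lower() for t in forbidden)

# Each new piece of team feedback is appended here as a new check.
REQUIREMENTS = [under_max_words, no_hedging_terms]

def evaluate(outputs: list[str]) -> float:
    # Fraction of (output, requirement) pairs that pass.
    checks = [req(o) for o in outputs for req in REQUIREMENTS]
    return sum(checks) / len(checks)
```

The score from a suite like this is what the optimizers above would try to maximize, so a new requirement changes the target without hand-editing the prompt.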



Yeah so like I said, I get the in-context optimization bit…which is nice, but pretty limited.

I have had precisely zero success with the LLM-prompt-writer elements. I would love to be wrong, but DSPy makes huge promises and falls painfully short on basically all of them.

I do not see any reason to use it.



