Hacker News

Talking about the debt of a system prompt feels really weird. A system prompt tied to an LLM is the equivalent of crafting a new model in the pre-LLM era: you measure its success with various quality metrics and improve the prompt iteratively to raise them. That can feel like a band-aid, but it's actually how the process is supposed to work, and it's entirely equivalent to "fixing" a machine learning model by improving its dataset.
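The loop described above (score a prompt against fixed metrics, then iterate) can be sketched roughly like this. This is a hypothetical illustration, not anyone's actual pipeline: `call_llm` is a stub standing in for a real API call, and the eval set and candidate prompts are made up.

```python
# Sketch: treat a system prompt like a model you tune against a fixed
# eval set, keeping whichever candidate scores best on your metrics.

EVAL_SET = [
    {"input": "2+2", "expected": "4"},
    {"input": "capital of France", "expected": "Paris"},
]

def call_llm(system_prompt: str, user_input: str) -> str:
    # Stub for illustration; a real version would call an LLM API.
    # Here, prompts asking for conciseness "behave" better.
    if "concise" in system_prompt:
        return {"2+2": "4", "capital of France": "Paris"}.get(user_input, "")
    return "I think the answer might be 4, or possibly Paris."

def score(system_prompt: str) -> float:
    # Quality metric: fraction of eval cases answered exactly.
    hits = sum(
        call_llm(system_prompt, case["input"]) == case["expected"]
        for case in EVAL_SET
    )
    return hits / len(EVAL_SET)

candidates = [
    "You are a helpful assistant.",
    "You are a helpful assistant. Be concise: reply with only the answer.",
]
best = max(candidates, key=score)
```

In a real setting the metric suite would be richer (exact match, LLM-as-judge scores, regression checks on past failures), but the shape is the same: change the prompt, re-run the evals, keep what raises the numbers.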
