
Telling the LLM that it must do something is not a guarantee that it'll follow through.


True. This is an open area of research. Tools like guidance (or other implementations of constrained decoding with LLMs [1,2]) will likely help mitigate this problem.

[1] A guidance language for controlling large language models. https://github.com/guidance-ai/guidance

[2] Knowledge Infused Decoding. https://arxiv.org/abs/2204.03084
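To make the constrained-decoding idea concrete, here's a toy sketch in plain Python. It does not use guidance's actual API or a real model — the vocabulary, the allowed outputs, and the fake_logits stand-in are all made up for illustration. The point is the mechanism: at each decoding step the sampler masks out every token that cannot extend a string the constraint allows, so the model is simply unable to emit an output that violates the instruction.

    import math
    import random

    # Hypothetical vocabulary and constraint, for illustration only.
    VOCAB = ["yes", "no", "may", "be", "maybe", "</s>"]
    ALLOWED = {"yes", "no", "maybe"}  # the only completions we will accept

    def fake_logits(prefix_tokens):
        """Stand-in for an LLM forward pass: random scores over the vocabulary."""
        return [random.gauss(0.0, 1.0) for _ in VOCAB]

    def allowed_next_tokens(generated_text):
        """Indices of tokens that keep the output a prefix of an allowed string."""
        ok = set()
        for i, tok in enumerate(VOCAB):
            if tok == "</s>":
                if generated_text in ALLOWED:   # may only stop on a valid output
                    ok.add(i)
            elif any(a.startswith(generated_text + tok) for a in ALLOWED):
                ok.add(i)
        return ok

    def constrained_decode(max_steps=5):
        text, tokens = "", []
        for _ in range(max_steps):
            logits = fake_logits(tokens)
            mask = allowed_next_tokens(text)
            # Disallow everything the constraint rules out by setting it to -inf.
            masked = [l if i in mask else -math.inf for i, l in enumerate(logits)]
            next_id = max(range(len(VOCAB)), key=lambda i: masked[i])
            if VOCAB[next_id] == "</s>":
                break
            tokens.append(next_id)
            text += VOCAB[next_id]
        return text

    print(constrained_decode())  # always prints "yes", "no", or "maybe"

Real implementations (guidance, grammar-based samplers, etc.) do the same thing at the logit level against a grammar or regex compiled over the tokenizer, which is why they can give hard guarantees about output shape where prompting alone cannot.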



