
That’s the nature of LLMs: they can’t really think ahead to “know” whether reasoning is required. So if a model is tuned to spit out reasoning first, that’s what it will do.

