
We haven't coded LLMs to be stochastic models; we coded them to predict text using whatever method gradient descent finds on a transformer architecture. That's not exactly the same thing.
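To make that concrete, here's a minimal sketch (assuming PyTorch; the model size, vocabulary, and data are toy placeholders, not anything from a real LLM). Notice that the only things we actually specify are the architecture and the next-token objective; gradient descent fills in everything else:

    # Toy next-token predictor: we write down the architecture and the
    # loss, and gradient descent finds whatever weights minimize it.
    import torch
    import torch.nn as nn

    vocab_size, d_model, seq_len = 1000, 64, 16

    class TinyLM(nn.Module):
        def __init__(self):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, d_model)
            layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=2)
            self.head = nn.Linear(d_model, vocab_size)

        def forward(self, tokens):
            # Causal mask: each position attends only to earlier tokens.
            mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
            return self.head(self.encoder(self.embed(tokens), mask=mask))

    model = TinyLM()
    opt = torch.optim.SGD(model.parameters(), lr=0.1)

    tokens = torch.randint(0, vocab_size, (8, seq_len))  # toy training batch
    logits = model(tokens[:, :-1])                       # predict token t+1 from tokens <= t
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, vocab_size), tokens[:, 1:].reshape(-1)
    )
    loss.backward()  # the optimizer only ever sees this one scalar objective
    opt.step()

Nowhere in that code do we say "be a stochastic model" or "don't reason"; the objective is silent on how prediction is accomplished.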

But more importantly, if you want to show that LLMs can't reason, you obviously need a test that, when applied to humans, would show that humans can reason. Otherwise your test isn't measuring reasoning but something stricter.


