
That's arguably the most interesting thing about the discussion on LLMs: can they reason? If they can, reasoning is an emergent function. Two years ago, before inference-time reasoning was really added to LLMs, a Hong Kong university paper reported that ChatGPT 3.5's reasoning was correct 64% of the time on a specific task (you asked it to run the same reasoning 100 times, and 36% of the time it would be wrong for no apparent reason). I'd like to see how modern LLMs, with added inference-time reasoning and a lot of helpers in front of the actual transformer, perform on this test. A sketch of that repeat-and-score setup is below.
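
Something like the experiment described above (same prompt, many runs, count the correct ones) is easy to reproduce. A minimal sketch, assuming the OpenAI Python client; the model name, the example prompt, and the is_correct() checker are placeholders, not anything from the paper:

    # Repeated-sampling consistency test: ask the model to solve the same
    # reasoning task N times and measure how often the answer is correct.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    PROMPT = ("A bat and a ball cost $1.10 in total. The bat costs $1.00 "
              "more than the ball. How much does the ball cost? "
              "Answer with just the number.")
    N = 100

    def is_correct(answer: str) -> bool:
        # Task-specific check; for this example the expected answer is 0.05.
        return "0.05" in answer

    correct = 0
    for _ in range(N):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model name; substitute your own
            messages=[{"role": "user", "content": PROMPT}],
            temperature=1.0,      # keep sampling on so runs can differ
        )
        if is_correct(resp.choices[0].message.content):
            correct += 1

    print(f"correct on {correct}/{N} runs ({100 * correct / N:.0f}%)")

The interesting number is the spread: a model that "reasons" should be right nearly every run, not 64% of the time.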

If a consensus emerges in five years and we decide that yes, LLMs can in fact reason, then reasoning would be an emergent capability, and that would be incredibly interesting.




