Hacker News

But can this algorithm ever produce reasoning without learning a whole universe of possible inputs?

Given the evidence that it fails to learn arithmetic, skips inference steps, and misassigns symbols, I'd say likely not.




Reasoning is abstracted from particulars, so in principle what it needs to learn is a finite set of rules. There are good reasons why current LLMs don't learn arithmetic and have odd failure modes: their processing is feed-forward (non-recursive) with a fixed computational budget. This means they cannot, in principle, learn general rules for arithmetic, which involve unbounded carrying. But this is not an in-principle limitation of LLMs or of gradient-descent-based ML in general.
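To make the carrying point concrete, here's a minimal sketch (my own illustration, not anything from the model itself) of grade-school addition. The carry loop must run once per digit, so the number of sequential steps grows with input length; a fixed-depth feed-forward pass, by contrast, performs a constant number of sequential steps, so for any fixed depth there is some input length whose carries it cannot fully propagate.

```python
def add_digits(a: str, b: str) -> str:
    """Add two non-negative decimal integers given as digit strings,
    using explicit carry propagation (the general rule for addition)."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)  # pad to equal length
    carry, digits = 0, []
    for da, db in zip(reversed(a), reversed(b)):  # least-significant digit first
        total = int(da) + int(db) + carry
        digits.append(str(total % 10))
        carry = total // 10  # this carry may ripple through every column
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

# Worst case for carrying: every column generates a carry, so the
# sequential work is proportional to the number of digits.
print(add_digits("999999999", "1"))  # -> 1000000000
```

A recursive or looping architecture can run this rule for as many steps as the input demands; a fixed-budget network has to memorize or approximate it up to some bounded length.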





