
My take is that GPT can only reason under a proper context. The model itself doesn't have any logical capabilities, but it can extend the existing context rather logically by mimicking the logical structures encoded into it through training. It may answer some simple questions directly, but it's already well known that GPT performs better when instructed to work step by step. Some comments here also mention that prompt engineering is needed to get GPT to work.
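To make the step-by-step point concrete, here's a minimal sketch of the two prompt styles. This is plain Python string handling with no particular model API assumed, and the exact wording of the step-by-step instruction is just an illustration:

    # Sketch of a direct prompt vs. a step-by-step ("chain of thought") prompt.
    # Only the prompt construction is shown; sending it to a model is left out.
    def build_prompts(question: str) -> tuple[str, str]:
        direct = f"Q: {question}\nA:"
        step_by_step = (
            f"Q: {question}\n"
            "A: Let's think step by step, writing out each intermediate "
            "deduction before giving the final answer."
        )
        return direct, step_by_step

    direct, cot = build_prompts(
        "If all bloops are razzies and all razzies are lazzies, "
        "are all bloops lazzies?"
    )
    print(direct)
    print(cot)

The second prompt tends to elicit better answers precisely because it seeds the context with the logical structure the model is expected to continue.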

In other words, GPT can't reason under improper contexts, which are only a few edits away from proper contexts, as demonstrated in this paper. Context is not just some chunk of data that goes in and out of the model, but a critical part of the model's reasoning capability. You need both the model and a proper prompt to perform proper logical reasoning. So it's 100% reasonable to say the model (alone) can't reason.

I think this perspective is important, because it means current LLMs are strictly tools to be wielded by humans, rather than actual intelligence.



