
> because they have not been engineered

The fundamental concept behind LLMs is to allow the model to autonomously deduce concepts, rather than explicitly engineering solutions into the system.



The fundamental concept is to learn the statistics of text, and in the process the model successfully captures syntax via long-range dependencies. There is no indication that it actively generates "concepts" or that it knows what concepts are. In fact, the model is not self-reflective at all: it cannot observe its own activations or tell me anything about them.
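
To make "learn the statistics of text" concrete, here is a toy sketch (my own illustration, not a claim about how a transformer works internally): a bigram model that simply counts which token follows which, then samples the next token from those counts. A language model is, very loosely, a far more expressive version of the same objective, predict the next token, fit by gradient descent instead of by counting.

    import random
    from collections import Counter, defaultdict

    # Count how often each token follows each other token (bigram statistics).
    def train_bigram(tokens):
        counts = defaultdict(Counter)
        for prev, nxt in zip(tokens, tokens[1:]):
            counts[prev][nxt] += 1
        return counts

    # Sample a next token in proportion to how often it followed `prev` in training.
    def sample_next(counts, prev):
        followers = counts[prev]
        tokens, weights = zip(*followers.items())
        return random.choices(tokens, weights=weights)[0]

    corpus = "the cat sat on the mat and the cat slept".split()
    model = train_bigram(corpus)
    print(sample_next(model, "the"))  # e.g. "cat" or "mat", per the counts

Running this prints a plausible continuation of "the" based purely on counts; whether scaling this kind of objective up yields anything we would call a concept is exactly what is in dispute here.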


There is an indication; you can find it by clicking through on this post.

The self-reflection part is probably true, but that’s not strictly necessary for understanding concepts.


It is important if we are to accept it as an agent that understands things, because self-reflection is so central and obvious to us.


I'm still waiting for someone to prove beyond a shadow of a doubt that humans have a single one of the features whose presence or absence in LLMs we're debating.


There is no way to prove it, because those features are subjective to humans. LLMs would at least have to show that they have a subjective view (currently the 'internal world' they report is inconsistent).


The internal worlds of humans are inconsistent: https://en.wikipedia.org/wiki/Shadow_(psychology)


> learn the statistics

> what concepts are

How do you know concepts aren’t just statistics?


"concept" is ill-defined , it s a subjective thing that humans invented. It is probably not possible to define it without a sense (a definition) of self.



