The fundamental concept behind LLMs is to allow the model to autonomously deduce concepts, rather than explicitly engineering solutions into the system.
The fundamental concept is to learn the statistics of text, and in the process the model ends up capturing syntax, including long-range dependencies, quite successfully. There is no indication that it actively forms "concepts" or that it knows what a concept is. In fact the model is not self-reflective at all: it cannot observe its own activations or tell you anything about them.
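To make "learn the statistics of text" concrete, here is a minimal sketch of the next-token prediction objective (my own toy illustration, not any particular model's code; the embedding layer stands in for a full transformer stack):

    import torch
    import torch.nn as nn

    vocab_size, d_model = 100, 32
    embed = nn.Embedding(vocab_size, d_model)
    lm_head = nn.Linear(d_model, vocab_size)

    tokens = torch.randint(0, vocab_size, (1, 16))   # toy "text"
    hidden = embed(tokens[:, :-1])                   # stand-in for the transformer layers
    logits = lm_head(hidden)                         # predicted distribution at each position
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, vocab_size),              # predictions for positions 1..t
        tokens[:, 1:].reshape(-1),                   # the actual next tokens
    )
    # Training only pushes this loss down; any "concepts" are whatever internal
    # structure happens to help with next-token prediction.

The point of the sketch is that nothing in the objective asks for concepts explicitly; they would only emerge, if at all, as a side effect of minimizing that loss.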
I'm still waiting for someone to prove beyond a shadow of a doubt that humans have a single one of these features we're debating the presence or absence of in LLMs.
There is no way to prove that, because those features are subjective to humans. LLMs would at least have to show they have a subjective view (currently the 'internal world' they report is inconsistent).
"concept" is ill-defined , it s a subjective thing that humans invented. It is probably not possible to define it without a sense (a definition) of self.