
Well then, time to find new theories to test. GPTs are great, but they clearly don't have a model of the world, of self, or of others, because none of that has been engineered in. It's probably going to take a lot of additional subsystems before this thing gets self-reflective. The hypothesis that, by scaling the giant clockwork, these things will magically emerge is... magical and unproven.

The great thing for cognitive scientists/linguists is that we now have a quantitative, precise framework and no longer need to talk in terms of the folk intelligence science of the past.



> because none of that has been engineered in

The fundamental concept behind LLMs is to allow the model to autonomously deduce concepts, rather than explicitly engineering solutions into the system.


The fundamental concept is to learn the statistics of text, and in the process the model successfully captures syntax via long-range connections. There is no indication that it actively generates "concepts" or that it knows what concepts are. In fact, the model is not self-reflective at all; it cannot observe its own activations or tell me anything about them.
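For concreteness, here's what "learning the statistics of text" looks like in its most stripped-down form: a toy bigram model. This is an illustrative sketch only (the corpus and names are made up); actual GPTs learn far richer, longer-range statistics with transformers, but the underlying signal is the same kind of counting-and-predicting over text:

    from collections import Counter, defaultdict

    # Toy corpus; a real model trains on trillions of tokens.
    corpus = "the cat sat on the mat the cat ate".split()

    # Estimate P(next word | previous word) from raw counts.
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    # Generate by repeatedly taking the most likely continuation.
    word = "the"
    for _ in range(5):
        word = following[word].most_common(1)[0][0]
        print(word)  # cat sat on the cat

Nothing in that loop represents a "concept"; whether stacking enough of this kind of statistical machinery eventually produces one is exactly what's in dispute.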


There is an indication; you can find it by clicking on this post.

The self-reflection part is probably true, but self-reflection isn't strictly necessary for understanding concepts.


It is important for us to accept it as an agent that understands things, because self-reflection is so central and obvious to us.


I'm still waiting for someone to prove beyond a shadow of a doubt that humans have a single one of these features whose presence or absence in LLMs we're debating.


There is no way to prove it, because those features are subjective to humans. LLMs would at least have to show they have a subjective view (currently, the 'internal world' they report is inconsistent).


The internal worlds of humans are inconsistent: https://en.wikipedia.org/wiki/Shadow_(psychology)


> learn the statistics

> what concepts are

How do you know concepts aren’t just statistics?


"concept" is ill-defined , it s a subjective thing that humans invented. It is probably not possible to define it without a sense (a definition) of self.


I think people are overstating the capabilities of these programs (things get confusing when software starts to pass the Turing test).

However:

> they clearly don't have a model of the world, of self, or of others, because none of that has been engineered in

Neither did we.

> The hypothesis that, by scaling the giant clockwork, these things will magically emerge is... magical and unproven.

Our sapience was and is an emergent phenomenon. Superstition aside, was that magic?


Humans have a lot more subsystems that were shaped by evolution, not just by inflating a giant cortex. Many animals have even bigger cortices but show no sign of humanlike intelligent behavior or communication.


> Well then, time to find new theories to test.

You're essentially requesting the goalpost be moved.


Yes. These goalposts were just a test, but passing it isn't satisfying enough to make the AI more of a person. If it were, ChatGPT would be allowed to participate here.

The cognitive tests we rely on (the Turing test, the Chinese room, etc.) are woefully outdated and inadequate for our time.

The goalposts will always be moved, by the way, because our experience of intelligence is subjective and we'll never have an objective measure of it. At some point we will stop moving them because we've run out of ideas. At that point we can say we have a facsimile of our intelligence.




