
> chain-of-thought is not exactly that kind of "internal loop of information" that you mention.

It is, but (1) the amount of looping in models today is extremely limited. If our awareness loop runs on the order of milliseconds, we experience it over thousands of milliseconds at a minimum, and we consider and consolidate our reasoning about experiences over minutes, hours, even days. That works out to thousands to many millions of iterations of experiential context.

Then (2), the looping in models today is not something the model is aware of at a higher level. It processes its inputs iteratively, but it cannot step back and recurrently examine its own responses at a second, more indirect level.

That said, I do believe models can reason about themselves and behave as if they had that higher-level functionality.

But their current ability to reason that way was trained into them from human behavior, not learned independently by actually monitoring their own internal dynamics. They cannot yet do that. We do not learn that we are conscious, or become conscious, by parroting the consciousness-enabled reasoning of others. A subtle but extremely important difference.

Finally, (3) they don't build up a memory of their internal loops, much less a common experience built from the pervasive presence of such loops.

Those are just three quite major gaps.

But they are not fundamental gaps. I have no doubt that future models will become conscious as these limitations are addressed.
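
To make the gaps concrete, here is a minimal, purely illustrative Python sketch. The model() stub, the prompts, and the memory list are my own assumptions, not any real API or anyone's actual architecture; the point is only the shape of the outer loop and the persistent record that points (2) and (3) are asking for.

    # Purely illustrative sketch; model() is a hypothetical stub, not a real API.
    def model(prompt):
        # Stands in for one forward pass; any chain-of-thought happens
        # inside this single call and is then thrown away.
        return "answer to: " + prompt

    # Roughly what exists today: one pass, no second-level examination,
    # no memory of the loop afterwards.
    answer = model("some question")

    # Roughly what (2) and (3) would require: an outer loop in which the
    # system re-examines its own output and keeps a persistent record of
    # those examinations over time.
    memory = []                        # would have to persist across sessions
    response = model("some question")
    for step in range(3):              # humans: effectively millions of iterations
        critique = model("examine your own response: " + response)
        memory.append(critique)        # a memory of the loop itself
        response = model("revise, given: " + critique)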


