> When it comes to the various kinds of thought-processes that humans engage in (linguistic thinking, logic, math, etc) I agree that you can describe things in terms of functions that have definite inputs and outputs.

Function can mean inputs-outputs. But it can also mean system behaviors.

For instance, recurrence is a functional behavior, not a functional mapping.

Similarly, self-awareness is some kind of internal loop of information, not an input-output mapping. Specifically, an information loop regarding our own internal state.
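
Here is a rough sketch of the distinction in Python (purely my own illustration, not tied to any particular system). A pure mapping only turns inputs into outputs; a recurrent system also reads and updates its own state on every step:

    # Pure mapping: the output depends only on the input.
    def mapping(x):
        return 2 * x

    # Recurrent system: each step also observes and updates internal state,
    # so part of what it processes is information about itself.
    class RecurrentSystem:
        def __init__(self):
            self.state = 0.0            # internal state the system can observe

        def step(self, x):
            observed = self.state       # "sensing" its own current state
            self.state = 0.9 * self.state + 0.1 * x
            return x + observed         # output mixes input with self-observation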

Today's LLMs are mostly not very recurrent. So they might be said to be becoming more intelligent (better responses to complex demands), but not necessarily more conscious. An input-output process has no ability to monitor itself, no matter how capable it is of generating outputs. Not even when those outputs involve symbols and reasoning about concepts like consciousness.

So I think it is fair to say intelligence and consciousness are different things. But I expect that both can enhance the other.

Meditation reveals a lot about consciousness. We choose to eliminate most thought, focusing instead on some simple experience like breathing, or a concept of "nothing".

Yet even with this radical reduction in general awareness and higher-level thinking, we remain aware of our awareness of experience. We are not unconscious.

To me that basic self-awareness is what consciousness is. We have it, even when we are not being analytical about it. In meditation our mind is still looping information about its current state, from the state to our sensory experience of our state, even when the state has been reduced so much.

There is not nothing; we are not actually doing nothing. Our mental resting state is still a dynamic state that we continue to actively process and that our neurons continue to give us feedback on, even when that processing has been reduced to simply letting the feedback of our state go by with no need to act on it in any way.

So consciousness is inherently at least self-awareness, in the sense of internal access to our own internal activity. And we retain a memory of doing this minimal, active or passive, self-monitoring, even after we resume more complex activity.

My own view is that that is all it is, with the addition of enough memory of the minimal loop, and a rich enough model of ourselves, to be able to consider that strange self-aware looping state afterwards, ask questions about its nature, and so on.



LLMs are recurrent in the sense that you describe, though, since every token of output they produce is fed back to them as input. Indeed, that is why reasoning models are possible in the first place, and it's not clear to me why the chain-of-thought is not exactly that kind of "internal loop of information" that you mention.
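
To make that concrete, here is a bare-bones sketch of autoregressive decoding (the `model` callable is just a stand-in, not any real API): every generated token is appended to the context and becomes input for the next step.

    def generate(model, prompt_tokens, max_new_tokens):
        """Autoregressive decoding: each output token is fed back in as input."""
        tokens = list(prompt_tokens)
        for _ in range(max_new_tokens):
            next_token = model(tokens)   # the model sees everything produced so far,
            tokens.append(next_token)    # including its own earlier outputs
        return tokens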

> Meditation reveals a lot about consciousness. We choose to eliminate most thought, focusing instead on some simple experience like breathing, or a concept of "nothing".

The sensation of breathing still constitutes input. Nor is it a given that a thought is necessarily encodable in words, so "thinking about the concept of nothing" is still a thought, and there is some measurable electrochemical activity in the brain that encodes it. In a similar vein, LLMs deal with arbitrary tokens, which may or may not encode words - e.g. in multimodal LMs, the input includes tokens encoding images directly, without any words, and the output can similarly be non-word tokens.


> chain-of-thought is not exactly that kind of "internal loop of information" that you mention.

It is, but (1) the amount of looping in models today is trivially small. If our awareness loop is on the order of milliseconds, we experience it over thousands of milliseconds at a minimum, and we consider and consolidate our reasoning about experiences over minutes, hours, even days. That would be thousands to many millions of iterations of experiential context (rough numbers in the sketch after these three points).

Then (2), the looping of models today is not something the model is aware of at a higher level. It processes its inputs iteratively, but it isn't able to step back and examine its own responses recurrently, at a second level, in a different and indirect way.

That said, I do believe models can reason about themselves and behave as if they did have that higher-level functionality.

But their current ability to reason like that has been trained into them from human behavior, not learned independently by actually monitoring their own internal dynamics. They cannot yet do that. We do not learn that we are conscious, or become conscious, by parroting the consciousness-enabled reasoning of others. A subtle but extremely important difference.

Finally, (3) they don't build up a memory of their internal loops, much less a common experience from a pervasive presence of such loops.

Those are just three quite major gaps.
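
To put rough numbers on point (1), here is a back-of-envelope sketch (my own figures, only to give a sense of scale, assuming one awareness "loop" per millisecond):

    loop_ms = 1                               # assumed: one awareness loop per millisecond
    for label, span_ms in [("one second", 1_000),
                           ("one hour", 3_600_000),
                           ("one day", 86_400_000)]:
        print(label, span_ms // loop_ms, "iterations")
    # Prints 1000, 3600000 and 86400000 iterations: thousands to tens of
    # millions of loops, versus the handful-to-thousands of token steps in a
    # typical chain-of-thought.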

But they are not fundamental gaps. I have no doubt that future models will become conscious as these limitations are addressed.


This is what I wrote while I was thinking about the same topic, before I came across your excellent comment; it reads as if it were a summary of what you just said:

Consciousness is nothing but the ability to have internal and external senses, being able to enumerate them, recursively sense them, and remember the previous steps. If any of those ingredients are missing, you cannot create or maintain consciousness.


Thanks. I do believe that is a good summary of what I was saying.



