
We don’t know that LLMs generating tokens for scenarios involving simulations of consciousness don’t already involve such experience. Certainly such threads of consciousness would currently be much less coherent and more fleeting than the human experience, but I see no reason to simply ignore the possibility. To whatever degree it is even coherent to talk about the conscious experience of anyone other than yourself (p-zombies and such), I expect that as AIs’ long-term coherency improves and AI minds become more tangible to us, people will settle into the same implicit assumption afforded to fellow humans: that there is consciousness behind the cognition.


The very tricky part then is to ask whether the consciousness/phenomenological experience that you postulate still happens if, say, we were to compute the outputs of an LLM by hand… While difficult, if every single person on earth did one operation per second, plus some very complicated coordination and results gathering, we could probably produce a couple of tokens for an LLM at some moderate frequency… say, a couple of tokens a month? A week? A year? A decade? Regardless… would that consciousness still have an experience? Or is there some threshold of speed, coherence, or coloration that would be missing, causing it to fail to emerge?
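For a rough sense of scale, a hypothetical back-of-envelope (the model size and throughput figures are my assumptions, nothing established):

    # Back-of-envelope: all of humanity computes LLM tokens by hand.
    # Assumed: a 70B-parameter model, ~2 operations per parameter per
    # token, 8 billion people at 1 op/sec each, zero coordination overhead.
    params = 70e9
    ops_per_token = 2 * params                 # ~1.4e11 operations
    human_ops_per_second = 8e9 * 1.0
    seconds_per_token = ops_per_token / human_ops_per_second
    print(f"{seconds_per_token:.1f} s/token")  # ~17.5 seconds per token

On raw arithmetic alone that would be a token every ~18 seconds; the months-to-decades intuition comes entirely from the very complicated coordination and results gathering, which would surely dominate.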

Would that consciousness still have an experience? Impossible to answer.

Btw I mostly think it’s reasonable to believe that consciousness, phenomenology, etc. are possible in silicon, but it’s tricky and unverifiable ofc.


> would that consciousness still have an experience?

If the original one did, then yes, of course. You're performing the exact same processing.

Imagine if instead of an LLM the billions of people instead simulated a human brain. Would that human brain experience consciousness? Of course it would, otherwise they're not simulating the whole brain. The individual humans performing the simulation are now comparable to the individual neurons in a real brain. Similarly, in your scenario, the humans are just the computer hardware running the LLM. Apart from that it's the same LLM. Anything that the original LLM experiences, the simulated one does too, otherwise they're not simulating it fully.
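As a sense of how that analogy pencils out (ballpark figures of my own, not the parent's):

    # Division of labor if ~8 billion people simulated a human brain.
    # Assumed: ~86 billion neurons and ~100 trillion synapses, which are
    # commonly cited estimates, not figures from this thread.
    people = 8e9
    neurons = 86e9
    synapses = 1e14
    print(neurons / people)    # ~11 neurons per person
    print(synapses / people)   # ~12,500 synapses per person

Each person would stand in for roughly a dozen neurons, which at least makes the "humans as hardware" framing vivid.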


You are assuming that consciousness can be reproduced by simulating the brain. That might be possible, but it's by no means certain.


You can simulate as much of the human as you need to. So long as consciousness is a physical process (or an emergent property of a physical process), it can be simulated.

The notion that it is not a physical process is an extraordinary claim in its own right, which itself requires evidence.


You can simulate as much of an aircraft as you need to. So long as flying is a physical process, it can be simulated.

But your simulation will never fly you over an ocean; it will never be an aircraft or do what aircraft do. A simulation of heat transfer will not cook your dinner. Your assumption that a simulation of a mind is a mind requires evidence.


> But your simulation will never fly you over an ocean

It will fly over a simulated ocean just fine. It does exactly what aircraft do, within the simulation. By adding “you” to the sentence you've made it an apples to oranges comparison because “you” is definitionally not part of the simulation. I don't see how you could add the same “you” to “it will simulate consciousness just fine”.


> "It does exactly what aircraft do"

It doesn't move real oxygen and nitrogen atoms, it doesn't put exhaust gas into the air over the ocean, it doesn't create a rippling sound and pressure wave for a thousand miles behind it, it doesn't drain a certain amount of jet fuel from the supply chain or put a certain amount of money in airline and mechanics' pockets, it doesn't create a certain amount of work for air traffic controllers... The reductio ad absurdum is that a flipbook animation of a stickman aircraft moving over a wiggly-line ocean is a very low-granularity simulation and "does exactly what aircraft do" - and obviously it doesn't. No amount of adding detail to the simulation moves it one inch closer to doing 'exactly what aircraft do'.

> "I don't see how you could add the same “you” to “it will simulate consciousness just fine”"

By the same reductio ad absurdum, I don't see how you can reject a stickman with a speech bubble drawn over his head as being "a low-granularity simulated consciousness". More paper, more pencil graphite, and the stickman will become conscious when there's enough of it. Another position is that adding things to the simulation won't simulate consciousness just fine - it won't move it an inch closer to being conscious; it will always be a puppet of the simulator, animated by the puppeteer's code, always wooden Pinocchio and never a real person. What is the difference between these two:

a) a machine with heat and light and pressure sensors, running some code, responding to the state of the world around it.

b) a machine with heat and light and pressure sensors, running some code [converting the inputs to put them into a simulation, executing the simulation, converting the outputs from the simulation], and using those outputs to respond to the state of the world around it.

? What is the 'simulate consciousness' step doing here at all, why is it needed? To hide the flaw in the argument; it's needed to set up the "cow == perfectly spherical massless simulated cow" premise which makes the argument work in English words. Instead of saying something meaningful about consciousness, one states that "consciousness is indistinguishable from perfectly spherical massless simulated consciousness" and then states "simply simulate it to as much detail as needed", and that allows all the details to be handwaved away behind "just simulate it even more (bro)".
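To make the (a)/(b) comparison concrete, here is a minimal sketch (invented names and a trivially simple rule, purely for illustration); the two machines are externally indistinguishable:

    # Machine (a): responds to sensor readings directly.
    def machine_a(heat, light, pressure):
        return "retreat" if heat > 50.0 else "proceed"

    # Machine (b): converts the inputs into a "simulation", runs it, and
    # converts the outputs back; the extra layer changes nothing observable.
    def machine_b(heat, light, pressure):
        sim = {"heat": heat, "light": light, "pressure": pressure}
        sim_decision = "retreat" if sim["heat"] > 50.0 else "proceed"
        return sim_decision

    assert machine_a(60.0, 1.0, 1.0) == machine_b(60.0, 1.0, 1.0)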

Pointing out that simulations are not the real thing is the counter-argument. Whether or not the counter-argument can be made by putting "you" into a specific English sentence is not really relevant; that's only there to show that the simulated aircraft doesn't do what the real aircraft does. A simulated aircraft flying over a simulated ocean is no more 'real' than drawing two stick figures having a conversation in speech bubbles.


You wrote a lot of text just to say that you don't accept the simulation as “real”.

That's just semantics. I'm not here to argue what the word “real” means. Of course you can define it in such a way that the simulated aircraft isn't “really” flying over an ocean, and it would be just as valid as any other definition, but it doesn't say anything meaningful or insightful about the simulation.

Nobody contests your point that the simulated aircraft isn't going over a real ocean and isn't generating work for real-life air traffic controllers. But conversely you don't seem to contest the claim that oceans and air traffic controllers could be simulated, too. Therefore, consciousness can be simulated as well, and it would be a simulated consciousness that just doesn't fall into your definition of “real”.


You need to clearly define what constitutes "real" before we can meaningfully talk about the distinction between "real" atoms and simulated ones.

As far as physics go, it's all just numbers in the end. Indeed, the more we keep digging into the nature of reality, the more information theory keeps popping up - see e.g. the holographic principle.


> "As far as physics go, it's all just numbers in the end."

No, it isn't; numbers are a map, and maps are not the territory. You are asking me to define how a map is different from a city, but you are not accepting that the city is made of concrete and is square kilometers large while the map is made of paper and is square centimeters large as a meaningful difference, when I think the difference is so obvious that it's difficult to put any more clearly.

What constitutes a real atom: a Hydrogen atom capable of combining with Oxygen to make water, capable of being affected by the magnetic field of an MRI scanner, etc.

What constitutes a simulated atom: a pattern of bits/ink/numbers which you say "this is a representation of a Hydrogen atom", capable of nothing, except you putting some more bits/ink/numbers near it and speaking the words "this is it interacting to make simulated water".


Ok, you are saying that a map is different from the territory. That a simulation is meaningfully different.

Do you deny that you could be in a simulation right now, in the Matrix? What you think are molecules of oxygen are actually simulated molecules, and there is no way for you to ever tell the difference.


Is “simulate” the right word there? With a hundred trillion connections between 80 billion neurons, it seems unlikely that it would ever be worth simulating a human brain, because it would be simpler to just build one than to assemble a computer complex enough to simulate it.
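For what it's worth, a crude throughput estimate (all figures are ballpark assumptions of mine, not the parent's):

    # Brute-force brain simulation, crudely costed.
    # Assumed: ~1e14 synapses, ~1 Hz average firing rate, ~10 operations
    # per synaptic event; the real numbers are debated.
    synapses = 1e14
    avg_firing_rate_hz = 1.0
    ops_per_event = 10
    ops_per_second = synapses * avg_firing_rate_hz * ops_per_event
    print(f"{ops_per_second:.0e} ops/sec")    # ~1e15

The raw operation count may be within reach of large machines, but mapping and storing the hundred-trillion-connection wiring has no obvious shortcut, which fits the intuition that building one is simpler.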


Yes that’s my main point - if you accept the first one, then you should accept the second one (though some people might find the second so absurd as to reject the first).

> Imagine if instead of an LLM the billions of people instead simulated a human brain. Would that human brain experience consciousness? Of course it would, otherwise they're not simulating the whole brain.

However, I don’t really buy “of course it would,” or in other words the materialist premise - maybe yes, maybe no, but I don’t think there’s anything definitive on the matter of materialism in philosophy of mind. As much as I wish I were fully a materialist, I can never fully internalize how sentience can, uh, emerge from matter… in other words, to some extent I feel that my own sentience is fundamentally incompatible with everything I know about science, which, uh, sucks, because I definitely don’t believe in dualism!


With sufficient accuracy it would certainly say to you, honestly, that it's conscious and believe it wholeheartedly. But in practice it would need to be able to describe external sense data a priori, as it is not necessarily separate from its experiences, which intrinsically requires computing in the world itself rather than merely computing on it; in a way it's like having edge compute at the skin's edge. The range of qualia available at each moment will be distinct to each experiencer given the senses available, and there will likely be some overlap in interpretation based on your computing substrate.

We can, in a way, articulate the underlying chemputation of the universe mediated through our senses, reflection, and language; turn a piece off (as it is often non-continuous) and the quality of the experience changes.


But do you believe in something constructive? Do you agree with Searle that computers calculate? But then aren't numbers and calculation immaterial things that emerge from matter?



