>Only that there's something special about subjective experience that wouldn't arise naturally in an NN approaching even animal intelligence.
That isn't at all what I've said. I'm saying that 'qualia' exist and that we have no clue how they arise. Maybe they arise from complicated enough systems, maybe they don't. Hell, maybe panpsychists are right and even a rock has some sort of consciousness. My issue is with people who are confident that a big enough NN necessarily has consciousness.
>I assume that "subjective experience" has some observable consequences, of which you can form memories. Being able to swap out parts of a brain will allow you to have a different subjective experience and then compare them. It is an experimental tool. I don't know what you will observe since that experiment has not been performed.
Unless you presuppose that there is some part that completely determines subjective experience (I don't think it'd even be possible to identify such a part if it existed), I don't see how that would work. Yes, you can swap out a part and see that your subjective experience changes, but this tells you nothing about the subjective experience of others.
If by qualia you mean slight differences in information processing in human brains, then sure. If you mean anything more than that, I would like a) a better definition than the one I have given and b) some observational evidence for its existence.
> My issue is with people who are confident that a big enough NN necessarily has consciousness.
Not necessarily, just potentially. After all, there will be many inefficient/barely-better-than-previous/outright defective big NNs on the path to AGI.
If you're asking whether an intelligent NN will automatically be conscious, then it depends on what we mean by "intelligent" and "conscious". A mathematical theorem prover may not need many facilities that a human mind has, even though it still has to find many highly abstract and novel approaches to do its work. On the other hand, an agent interacting with the physical world and other humans will probably benefit from many of the same principles, and the mix of them is what we call consciousness. One problem with "consciousness" is that it's such an overloaded term. I recommend decomposing it into smaller features that we care about and then we can talk about whether another system has them.
> Hell, maybe panpsychists are right and even a rock has some sort of consciousness.
If we twist words far enough then of course they do. They are following the laws of physics, after all, which is information processing: going from one state to another. But then all physical systems do that, and it's usually not the kind of information processing we care that much about when talking about intelligences. Technically correct given the premise, but useless.
> I don't think it'd even be possible to identify such a part if it existed
We're already making the assumption that we have the technology to simulate a brain. If you have that ability, you can also implement any debugging/observational tooling you need. AI research is not blind; co-developing such tooling together with the networks is happening today: https://openai.com/blog/introducing-activation-atlases/
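As a hedged illustration of what such tooling looks like today (a minimal sketch assuming a PyTorch-style model; the model and layer names are my own placeholders, not anything from the linked post), a forward hook can expose an intermediate layer's activations for inspection:

```python
import torch
import torchvision.models as models

# A pretrained vision model stands in for "the network under study".
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

captured = {}

def capture(name):
    # Forward hook: stores the layer's output every time the model runs.
    def hook(module, inputs, output):
        captured[name] = output.detach()
    return hook

# Attach the probe to an intermediate layer; any module can be observed this way.
model.layer3.register_forward_hook(capture("layer3"))

with torch.no_grad():
    model(torch.randn(1, 3, 224, 224))  # a dummy input image

print(captured["layer3"].shape)  # the internal state is now inspectable
```

Activation atlases are built on exactly this kind of access: the network's internal state isn't hidden the way a brain's is.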
>If by qualia you mean slight differences in information processing in human brains, then sure. If you mean anything more than that, I would like a) a better definition than the one I have given and b) some observational evidence for its existence.
Subjective experiences, i.e. how I actually experience sense data. There is no real, objective observational evidence, and there can't be. How would you describe taste to a species of aliens that understands the processes that happen during tasting but doesn't taste anything itself? It's simply impossible. I know that I have personal, subjective experiences (the 'images I see' are not directly the sense data that I perceive), but I can only appeal to you emotionally to try and make you believe that it exists, operating under the assumption that you too must have these experiences.
>One problem with "consciousness" is that it's such an overloaded term. I recommend decomposing it into smaller features that we care about and then we can talk about whether another system has them.
This entire discussion has been about consciousness in the philosophical sense, i.e. the ability to have some form of subjective experience.
>If we twist words far enough then of course they do. They are following the laws of physics, after all, which is information processing: going from one state to another. But then all physical systems do that, and it's usually not the kind of information processing we care that much about when talking about intelligences. Technically correct given the premise, but useless.
This isn't about twisting words; some people genuinely believe that everything is conscious, with more complex systems being more conscious.
>We're already making the assumption that we have the technology to simulate a brain. If you have that ability, you can also implement any debugging/observational tooling you need. AI research is not blind; co-developing such tooling together with the networks is happening today
The point is that it's about _subjective_ experiences.
Fish tastes like fish because the taste is a categorizing representation of that sensory input.
What you can do today is start with a feature map. We can do that with colors https://imgs.xkcd.com/blag/satfaces_map_1024.png (do you perceive this color as red?) and we can do that with smells https://jameskennedymonash.files.wordpress.com/2014/01/table...
That's a fairly limited representation, but words are an incredibly low-bandwidth interface not suitable for exporting this kind of information in high fidelity, so we can't. That does not mean it's conceptually impossible. If you wanted to export subjective experience itself, then you'd need the previously mentioned debugging interface. Our brains don't have that built-in, but software does. I.e. a program can dump its entire state and make it available to others.
To me subjective experience seems to be an intermediate representation, deep between inputs and outputs, and due to the various limitations we're bad at communicating it. That doesn't mean there's anything special about it. It is a consequence of compressing inputs into smaller spaces in ways that are useful to that entity.
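To make that concrete, here is a toy sketch of my own (an analogy in code, not a claim about brains): an autoencoder's bottleneck is an intermediate representation that compresses inputs into a smaller space in a way that is useful to the system, and unlike a brain, the program can export that state wholesale:

```python
import torch
import torch.nn as nn

# Toy autoencoder: the 8-dim bottleneck is the "intermediate representation",
# a lossy compression of the input that is useful to the system itself.
class AutoEncoder(nn.Module):
    def __init__(self, in_dim=64, code_dim=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 32), nn.ReLU(), nn.Linear(32, code_dim))
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 32), nn.ReLU(), nn.Linear(32, in_dim))

    def forward(self, x):
        code = self.encoder(x)            # compress into the smaller space
        return self.decoder(code), code

model = AutoEncoder()
_, code = model(torch.randn(4, 64))

# Unlike a brain, software can dump its internal state in full fidelity:
torch.save({"weights": model.state_dict(), "code": code}, "internal_state.pt")
```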
> This isn't about twisting words; some people genuinely believe that everything is conscious, with more complex systems being more conscious.
Anything that interacts with the world will have an internal, idiosyncratic representation of that interaction. Even a rock will have momentary vibrations traveling through it that carry some information about the world.
One of today's NNs will have feature layers that roughly correspond to concepts that are of human interest. They're often crude approximations, but that's good enough for some use-cases.
Animal brains just have more of that.
So in that sense, sure, it's a continuum. But there's nothing mysterious about it.
>Fish tastes like fish because the taste is a categorizing representation of that sensory input.
Yes, but why does the taste of fish have the particular quality it does? Hell, try explaining what fish tastes like without evoking similar tastes.
>What you can do today is start with a feature map. We can do that with colors https://imgs.xkcd.com/blag/satfaces_map_1024.png (do you perceive this color as red?) and we can do that with smells https://jameskennedymonash.files.wordpress.com/2014/01/table.... That's a fairly limited representation, but words are an incredibly low-bandwidth interface not suitable for exporting this kind of information in high fidelity, so we can't. That does not mean it's conceptually impossible. If you wanted to export subjective experience itself, then you'd need the previously mentioned debugging interface. Our brains don't have that built-in, but software does. I.e. a program can dump its entire state and make it available to others.
But a feature map doesn't tell you anything about how the space itself works. If you look at that smell chart, you'll see that it uses comparisons, because it's literally impossible for us to explain what smelling is like without saying "well, it's similar to smelling x". Someone who is born without smell could memorize that chart and understand everything there is to know about smelling, but he wouldn't actually know what it's like to smell.
>To me subjective experience seems to be an intermediate representation, deep between inputs and outputs, and due to the various limitations we're bad at communicating it. That doesn't mean there's anything special about it. It is a consequence of compressing inputs into smaller spaces in ways that are useful to that entity.
We're not just bad at communicating it, we're bad at understanding it, because our conventional means of measuring things don't really work for subjectivity. I'm not saying it's "magical", but it's not certain that we could ever build tools to interact with it.
> But a feature map doesn't tell you anything about how the space itself works.
The space is what is doing the work. Of course it's vastly more complex than a simple image with a few regions painted into it. There are only implementation details below it. The issue is that we cannot import and export them. With software that is a wholly different matter: representations can be transplanted, fine-tuned, probed, and so on.
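For what "transplanted, fine-tuned, probed" can mean in practice, here is a minimal sketch using standard transfer learning (my own illustration, assuming a torchvision ResNet; nothing here is specific to this debate):

```python
import torch.nn as nn
import torchvision.models as models

# Transplant: take the feature layers of a network trained on one task...
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# ...and graft a new head onto the same internal representation.
backbone.fc = nn.Linear(backbone.fc.in_features, 10)

# Fine-tune: freeze the transplanted representation, train only the new part.
for name, param in backbone.named_parameters():
    param.requires_grad = name.startswith("fc")
```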
> but it's not certain that we could ever build tools to interact with it.
I agree that this is all very speculative; we don't have the technology, and it may take a long time until we can actually inspect a human brain. But we may be able to do the same much more easily with artificial intelligences, once they are created.