Philosophical zombies react to external events in exactly the same way as normal people, including internally, but we are told they lack conscious experience. Thus the thought experiment is set up from the start to find that conscious experience is something non-physical - or else the p-zombies don't really do what they're claimed to do, which is to react identically to everyone else.
There's a dubious implication that conscious experience is completely cryptic, with no effect on the outside world (such as a person speaking the words "I consciously experienced that"), or at least that all such effects are shallow enough that they can be perfectly faked. If this is true, we ought to question why it's such a big deal. What's so great about consciousness? Why associate it with rights?
The Knowledge Argument is about a scientist who learns "everything" about colors intellectually but doesn't see them until years later, and seeing a red tomato is a revelatory experience even after all that book-learning, so it implies that experiences are beyond knowledge, or beyond physics, or beyond tomatoes or something. But really all it shows is that intellectual learning is dry and dusty and limited. Like with the p-zombies, the premise is wrong. The scientist didn't really learn everything before having the experience, though in principle she could have, were it not for the limits of communication, description, and simulation as we know those things presently. (And then the real experience would not have had any surprising or revelatory quale about it.)
> or at least that all such effects are shallow enough that they can be perfectly faked
Physicalism implies that things we wouldn't intuitively think of as conscious can perfectly mimic all such effects.
Imagine there's a person, John, and you take a precise scan of every neuron of his brain (or every particle if you prefer). You also record all the sensory input signals entering his brain. You write all this information down in a giant stack of papers. Then you go about simulating the brain with pencil and paper, computing its thoughts and actions (in this thought experiment people have deciphered exactly how neurons work). Maybe it takes you a trillion years to simulate one day of John's life, but you diligently do it.
Physicalism tells us that you can simulate John perfectly this way. You could perfectly predict every word he said and every muscle he moved. You could feed the motor neuron outputs of your simulation into a robot replica of John, and it would act indistinguishably from the original John.
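The key assumption doing the work here is determinism: if each neuron's next state is a fixed function of the current state plus input, then any faithful copy of the scan, whether run on silicon or worked out on paper, produces exactly the same behavior. Here's a minimal toy sketch of that idea (a made-up threshold-neuron model, not real neuroscience; all names and parameters are illustrative):

```python
import random

def step(state, weights, inputs, threshold=1.0):
    """One deterministic update: neuron i fires iff its weighted input
    from currently-firing neurons plus external input crosses threshold."""
    n = len(state)
    return [
        inputs[i] + sum(weights[i][j] for j in range(n) if state[j]) >= threshold
        for i in range(n)
    ]

rng = random.Random(42)  # fixed seed: a reproducible "scan" of the toy brain
n = 8
weights = [[rng.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
scan = [rng.random() > 0.5 for _ in range(n)]  # snapshot of neuron states
# recorded "sensory input" for 100 time steps
sensory = [[rng.uniform(0, 1) for _ in range(n)] for _ in range(100)]

def run(initial):
    """Replay the recorded inputs from the scanned state; fully deterministic."""
    state, trace = list(initial), []
    for inp in sensory:
        state = step(state, weights, inp)
        trace.append(tuple(state))
    return trace

# Two independent replays from the same scan agree on every step.
assert run(scan) == run(scan)
```

The pencil-and-paper simulator in the thought experiment is just `run()` executed by hand, unimaginably slowly; determinism guarantees the trace comes out the same.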
Is this pencil and paper simulation of John a p-zombie?
We either have to accept that this pencil and paper simulation of John is conscious, or that it's a p-zombie.
That's the normal physicalist position as far as I'm aware. Is there a physicalist theory where the simulation isn't conscious?
I was replying to a commenter who doubted that a non-conscious entity could mimic a person accurately. I'm pointing out the implication that without p-zombies, just by writing a bunch of stuff on paper, you created a conscious being. I'm not arguing that's false; just that intuitively it's a tough pill to swallow and if you don't want to accept that, p-zombies are the alternative.
> Thus the thought experiment is set up from the start to find that conscious experience is something non-physical
The point is that if you accept that p-zombies are possible, then you accept that consciousness is not necessarily physical. If it's not necessarily physical, then physicalism is false.
> really all it shows is that intellectual learning is dry and dusty and limited.
What it's attempting to show is the limit of factual knowledge. If physicalism is true, then everything that can be observed must reduce to objective third person facts. But, Mary has all of the objective third person facts. So if you find it implausible that Mary would be able to infer the experience of red before actually observing a rose, even with all of those facts, then you're admitting the existence of first person subjective facts, which cannot be reduced to objective third person facts, not even in theory.
Daniel Dennett has some great responses to these challenges.
Much though I'd be interested in Daniel Dennett's responses, I don't think you understood mine. I'm saying:
Mary doesn't have all of the objective third person facts, only the ones that can be conveyed to her academically.
If you want to sweep this aside with a magic gesture, and assert that she does somehow have all the facts (alright, all the objective third person facts), you are also making the science, communication, imaginative simulation, verbal learning process, all that kind of stuff, into something magical. Because what you're saying is that it now somehow has the power to be exactly like the real experience, which in this magical scenario will thus come as no revelatory surprise to her. We only expect it to be a surprise because of realism about the limits of book-learning as we know it, because she can only learn all that is explicitly known about colors that way, which is not all there is to know about them, and is not even all that is commonly known.
But like the brain, language is Turing-complete. Any information Mary's brain can figure out can also be described in language. If the brain could deduce something that couldn't be described with math and language, it would be doing something outside of our current understanding of physics.
You could respond that even though computability theory tells us it's possible to describe in language, the description would be far too long and complicated for Mary to understand. But I think that misses the thrust of the thought experiment. Even if we imagine Mary being so smart that she could understand and absorb the full written description of the color red, it still doesn't seem like that should be the same as experiencing seeing red. Most people's intuition would be that internal experiences are categorically different than facts.
And also, intuitively, the experience of the color red doesn't seem complex. The dumbest person on Earth can easily experience it, as can a newborn baby with no knowledge to draw on (I guess assuming newborns are sentient). Even a honey bee may be able to experience it. It's such a simple thing that it seems weird to think it's theoretically possible to describe with language, but the description is too complex for humans to understand.
It fundamentally feels weird to think that any combinations of words could ever be the same as experiencing the color.
Yes, it may do. I wonder what that feeling means. All this stuff about "Mary" doesn't clarify the source of the feeling. Often a weird feeling points to a misconception.
I think language could technically fully describe the experience, but it would be an unimaginable language, and that's why the idea feels weird: it's too difficult to imagine. Also as you say a pre-verbal child can get the idea just by seeing the color (though we might speculate that the inner significance and feeling of red evolves over a lifetime). So conveying it in language is a dumb-ass way to go about conveying it.
So various experiences convey complex ideas. How? We don't absorb ideas directly from the world through our pores and palps, we create them through interaction with an existing body of ideas in our minds. How much of that is gifted to us in DNA, a kind of natural set of default ideas about the world? IDK, some, like how the brain contains a body map and the eye does movement detection (which is repurposed as edge detection via a kludge involving tiny eye movements). The brain anticipates salient features of the world being a certain way, like residing in a monkey-shaped creature on a surface with gravity where objects have edges and move around. But some ideas are conveyed, through culture, non-verbally. This may include a lot of the early significance and feeling of seeing that something is red. What is that thing? Perhaps a toy, a shoe, a flower? It's probably a special thing requesting focus, anyway, so the cultural environment is already telling you things about it, and people around you will probably encourage your focussing on it, and that's how communication of the feeling of red begins, I reckon.
Oh! That's the argument. You "reckon" a lot of things about perception here. [Apologies if I misread your tone]
"feels weird" means "seems unlikely" in this context. At the end of the day, that's the most anyone can say on either side of the argument. You can't demonstrate perception works the way you think it works. You're relying on your intuition.
There are some things we can state definitively though.
> I think language could technically fully describe the experience, but it would be an unimaginable language
If it can't be described in normal English, then the brain is doing something beyond what a Turing machine can do. That means it's doing something beyond our current understanding of physics and quantum mechanics. I would call that non-physicalist. You have to choose: either perception can be fully described in English, or the brain is doing something non-physical. Neither option is intuitive, but that's the consequence of the Church-Turing thesis we must live with.
> Mary doesn't have all of the objective third person facts, only the ones that can be conveyed to her academically.
The premise of the argument is that Mary has all of the facts, and you can even imagine that she has a super powerful computer accessible to her to perform any calculations needed. The goal is to point out that it still seems implausible that Mary could infer knowledge of the experience of redness despite having an unbounded set of dry facts about physics and biology, because to most people, qualitative experience seems like knowledge of a different kind.
That said, you're sort of on the path towards Dennett's response.
I think OP is saying that p-zombies are a pointless thought experiment, because the intended outcome is hardcoded in the premise. If you accept the premise, you ipso facto already believe that consciousness is not necessarily physical, and the experiment doesn't change that. If you reject the premise, then the experiment is nonsensical.
> I think OP is saying that p-zombies are a pointless thought experiment, because the intended outcome is hardcoded in the premise
Yes and no. The point is to actually test your intuitions around the premise and the conclusion that must follow, to clarify your own thinking around the intuitive plausibility of the conclusions and premises, hence why these are called intuition pumps. They don't prove anything definitively unless you have intuitions around consciousness that agree/disagree with the premises, but you don't necessarily fully understand your own intuitions until you think about this problem.
>If physicalism is true, then everything that can be observed must reduce to objective third person facts.
You are erroneously equating "physicalism" with "reductive physicalism". It's clear to many of us that qualia are something subjective and non-physical yet emerge from physical processes.
Yes, people are capable of convincing themselves of all kinds of nonsense. The kind of ontological emergence you would require in such a model of qualia is still of a completely different kind than any other kind of emergence, and so still requires extending the ontology of basic physicalism. There seems to be very little motivation to do so at this point. Let's not repeat the mistakes of vitalism.
That seems to be a value judgement about whether consciousness is a categorically different type of "ontological emergence" compared to others.
Of course consciousness seems more mysterious because we don't have the explanation yet. Looking down at the levels we supervene on doesn't have the same mystique, though they would be just as inscrutable if we tried to look up.
Going from quantum field theory to Newtonian mechanics is just as weird a departure as the reverse. And does Newtonian mechanics prepare you for the existence of Bayesian statistics? Biological agents bouncing around randomly? The gestalt of an ecosystem?
There's no reason to assert that we can't one day have the math for describing consciousness too.
> That seems to be a value judgement about whether consciousness is a categorically different type of "ontological emergence" compared to others.
It's not a value judgment to point out that objective third person facts can logically entail other objective third person facts, but cannot logically entail subjective first person facts such as qualia. That's the whole point of the hard problem. All forms of ontological emergence we've actually seen are the first sort of entailment. To accept and explain qualia with ontological emergence, you need the second kind, which is something we've never seen anywhere else, and we're supposed to accept it only for consciousness because we have such a hard time accepting that our perceptions on this could be flawed. It's simply more plausible that we're mistaken about subjective first person facts, IMO.