You can simulate the Golden Gate Bridge on a sufficiently powerful computer, though, such that the simulation will behave exactly like the real thing.
The Chinese room is a good example of begging the question, since the premise that "understanding" is somehow distinct from the observable behavior of the system already implies the conclusion. But from a materialist perspective, the Chinese room, considered as a whole, does understand what it does. The fact that the man inside of it does not is simply irrelevant to the question.
>Chinese room, considered as a whole, does understand what it does. The fact that the man inside of it does not is simply irrelevant
Searle does address this point even in the original paper. That argument doesn't hold water, because you can imagine internalizing the whole room, doing every step of it in your head, and you still don't understand Chinese. Or, put differently: if you're a Mandarin speaker and the two of us sit in a room and I use you to secretly translate, you understand the meaning of what is being said, I don't, and it doesn't mean anything to say that we "as a system" do.
The point is that even though we can "as a system" behave as if we speak Mandarin, there's a difference between you and me: you understand what you're talking about, and I just hear gibberish. Searle is a die-hard materialist, by the way; none of this violates materialism. What he isn't is a functionalist. What he is teasing out in the thought experiment is that a system that produces the same output as another system does not need to be equivalent on the inside.
That is still nonsense. If you take the whole room as a system and integrate it into your brain as a subsystem, then yes, you will understand Chinese.
The reason we have to speak of the room and the person inside as a system is that the real magic is in the instructions. The fact that they are performed by a self-aware human is completely irrelevant to the setup and is only there to confuse the matter.
In your other example with two people, viewing them as a system doesn't make much sense, because one of those people is redundant: you can keep just the person who speaks Mandarin, and that is sufficient for the whole thing to function. So they alone are "the system". And that system also operates on instructions, except that those instructions are stored in the person's head and executed by low-level processes in the brain.
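To make the "magic is in the instructions" point concrete, here is a minimal sketch; the rule table, phrases, and names are invented for illustration and aren't from Searle's paper. All of the observable behavior lives in the rule table, and nothing changes whether a self-aware human or a three-line function does the matching.

    # Toy "room": the behavior lives entirely in the rule table; whoever (or
    # whatever) mechanically applies the rules is interchangeable.
    # RULES and follow_rules are invented names for illustration only.

    RULES = {
        "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
        "你会说中文吗？": "会一点。",    # "Do you speak Chinese?" -> "A little."
    }

    def follow_rules(symbols):
        # Blindly apply the rule table: match the input string, copy out the
        # listed response. Nothing depends on who or what performs the lookup.
        return RULES.get(symbols, "对不起，我不明白。")  # "Sorry, I don't understand."

    print(follow_rules("你好吗？"))  # prints: 我很好，谢谢。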
Searle believes that consciousness cannot be simulated as a digital computation, period. Given that any other physical process can be, this requires a belief that consciousness is somehow magically different from any other physical process in some unspecified way (that appears to be conjured out of thin air solely to make this one argument, at that). That is not materialism.
>The fact that they are performed by a self-aware human is completely irrelevant to the setup
Granted, but we can easily construct a clearer example that addresses both of your objections, right? Say the room contains a colorblind person with a machine that detects the properties of color.
If someone now asks you questions about colored objects, you can answer them, but I assume you grant that neither the colorblind person, nor the machine, nor the two as a system have conscious experiences of color vision the way you do. The conscious experience has nothing to do with function: every physical property can be described without necessarily experiencing any of it.
And I don't think your assertion about Searle's belief is correct (or at least I don't believe that). If you fully simulated a physical brain, down to the atom, I think the experience in the simulation is probably equivalent to the experience outside it. But if you merely model the outward functions of conscious agents, that is, their behavior, there's no reason at all to assume all those systems must be conscious or have experiences.
> If someone now asks you questions about colored objects you can answer them, but I assume you grant that neither the colorblind person, nor the machine, nor the two as a system have conscious experiences of color vision as you have.
It really depends on the setup. If the system is primed with knowledge of what color various things are (so e.g. it can say that grass is green because it is in the knowledge base), then, no, it does not experience color vision. It's just regurgitating facts.
On the other hand, suppose you actually have some kind of sensor that is capable of perceiving color, and you provide the output of that sensor to the colorblind person inside the room, who interprets the signals (say, represented as numbers) according to the rules. If those rules result in the system as a whole being able to say things like "the apple is red" when presented with a red apple, then yes, I would in fact argue that the system does consciously experience color vision.
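For concreteness, here is roughly what that sensor-plus-rules setup could look like as code; the sensor values, thresholds, and function names are all made up for illustration. The person (or program) in the room only ever manipulates numbers according to rules, yet the system as a whole answers questions about colored objects.

    # Sketch of the sensor-plus-rules room. All numbers and names are invented.

    def classify_color(rgb):
        # The rules the colorblind person follows: compare numbers, output a color word.
        r, g, b = rgb
        if r > max(g, b) + 50:
            return "red"
        if g > max(r, b) + 50:
            return "green"
        if b > max(r, g) + 50:
            return "blue"
        return "unsure"

    def describe(obj_name, sensor_reading):
        # The system as a whole answers questions about colored objects.
        return "The %s is %s." % (obj_name, classify_color(sensor_reading))

    # Pretend the sensor reported these numbers for a red apple.
    print(describe("apple", (200, 40, 30)))  # -> "The apple is red."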
> And I don't think your assertion about Searle's belief is correct.
Searle claimed that computers "merely" use syntactic rules to manipulate symbol strings but have no "understanding" of semantics, and that the Chinese room demonstrates that this is not sufficient for consciousness. This was not just about correctly modelling outward functions, though: quite obviously, the room has a lot going on inside, and you can model neural nets without physically simulating neurons, too. Quite frankly, Searle's attempt to draw some kind of qualitative distinction between biology and computation is nonsensical, because it's the same physics all the way down, and it is all representable as computation.