
> If someone now asks you questions about colored objects you can answer them, but I assume you grant that neither the colorblind person, nor the machine, nor the two as a system have conscious experiences of color vision as you have.

It really depends on the setup. If the system is primed with knowledge of what color various things are (so that it can say grass is green simply because that fact is in its knowledge base), then no, it does not experience color vision. It's just regurgitating facts.
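To make the contrast concrete: this first case is essentially a table lookup. A minimal Python sketch, where the object names and the KNOWLEDGE_BASE dict are invented purely for illustration:

    # Pure lookup: the system "knows" colors only as stored facts.
    KNOWLEDGE_BASE = {"grass": "green", "apple": "red", "sky": "blue"}

    def answer_color_question(obj):
        # No perception involved; this just retrieves a memorized association.
        return KNOWLEDGE_BASE.get(obj, "I don't know")

    print(answer_color_question("grass"))  # -> green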

On the other hand, suppose you actually have some kind of sensor capable of perceiving color, and you feed its output to the colorblind person inside the room, who interprets the signals (say, represented as numbers) according to the rules. If those rules result in the system as a whole being able to say things like "the apple is red" when presented with a red apple, then yes, I would argue that the system does consciously experience color vision.
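As a rough sketch of that second setup (the thresholds and the sample reading are made-up stand-ins for the room's rule book and the sensor's output):

    # Sensor-driven version: the person in the room applies rules to raw
    # numbers from a color sensor, without ever seeing the color themselves.
    def classify_rgb(r, g, b):
        # Toy rule table standing in for the room's instruction book.
        if r > 200 and g < 100 and b < 100:
            return "red"
        if g > 200 and r < 100 and b < 100:
            return "green"
        if b > 200 and r < 100 and g < 100:
            return "blue"
        return "unknown"

    # A red apple might yield a reading like (230, 40, 35), which the rules
    # let the system as a whole report correctly.
    print("the apple is " + classify_rgb(230, 40, 35))  # -> the apple is red

The point of the sketch is that the answer is driven by live perception plus rules, not by a stored fact.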

> And I don't think your assertion about Searle's belief is correct.

Searle claimed that computers "merely" use syntactic rules to manipulate symbol strings but have no "understanding" of semantics, and that the Chinese room demonstrates that this is not sufficient for consciousness. This was not just about correctly modelling outward functions, though: quite obviously, the room has a lot going on inside, and of course you can model neural nets without physically simulating neurons, either. Quite frankly, Searle's attempt to draw some kind of qualitative distinction between biology and computation is nonsensical, because it's the same physics all the way down, and it is all representable as computation.


