The Chinese room is a famous thought experiment by John Searle that purports to show that a machine, no matter how complex, cannot be conscious. It replaces the machine with a regular conscious person who, however, is “unconscious” — unaware — of some area of knowledge, such as the Chinese language. Still, this person (hidden in a closed room) can communicate in perfect Chinese with anyone on any topic simply by following a (supposedly large and complex) set of rules that “correlate one set of formal symbols with another set of formal symbols” (i.e., English with Chinese) that someone else has created for him. According to Searle, this paradox — that you may not know a single word of Chinese and yet pass for a fluent Chinese speaker by adhering to a set of rules — proves that even if a machine appears conscious and passes the Turing test with flying colors, it is still not really conscious, any more than the person in the room really knows Chinese.
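The setup can be sketched as pure symbol lookup. Here is a toy illustration (the rulebook entries and the default reply are invented placeholders; Searle's imagined rulebook would of course be vastly larger and more elaborate):

```python
# Toy "Chinese room": the operator matches incoming symbols against a
# rulebook and copies out the prescribed reply, understanding nothing.
# The entries below are invented placeholders for illustration only.
RULEBOOK = {
    "你好": "你好！",            # if you see these symbols, emit these
    "你会说中文吗？": "当然会。",  # the operator never learns what they mean
}

def operate_room(symbols: str) -> str:
    """Follow the rules mechanically; no understanding required."""
    # Unmatched input gets a canned fallback ("please say that again")
    return RULEBOOK.get(symbols, "请再说一遍。")

print(operate_room("你好"))  # the room answers; the operator does not
```

To the observer outside, the exchange looks like fluent Chinese; inside, only string matching happens.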
One immediate problem with this argument is that speaking Chinese, difficult as it may be, and appearing conscious and intelligent are tasks that aren’t quite analogous. To know Chinese, you just need to know Chinese; to appear minimally intelligent, you have to know a lot more about a lot more things. It may be argued that, for all its complexity, any language is ultimately a formal system that can be described by rules; for intelligence, this is much less clear. However, I’m not going to pursue this line of attack — I have something better.
There’s another problem: at best, this argument proves that a seemingly conscious machine can, but not necessarily must, be non-conscious. If it treats AI proponents as religious believers, all it can do is turn them into agnostics, not true atheists. If “just following some rules” is Searle’s definition of “being not really conscious,” then he must first show that we conscious humans are not, ultimately, just following rules ourselves. This argument certainly doesn’t accomplish that. “I feel that I am not a machine” can easily be a delusion: the man in the Chinese room, if he’s never seen a real Chinese speaker, may well think that what he does is speaking Chinese — that all the other Chinese speakers he’s communicating with are also English-speaking people who sit in their rooms with similar sets of rules. To me, this is a serious flaw of the argument — but it’s not what dooms it.
Searle purports to demonstrate how what appears on the outside (someone can speak Chinese, an AI is conscious) may contradict what really is (cannot and is not). But even if we accept Searle’s propositions on what is and what appears, the argument still fails because the two claims have no common subject: they are about different entities. To an outside observer, it’s not “the person in the room” who speaks Chinese: it is the room as a whole, with all its rule books and vocabularies. And whoever authored all those ingenious books, it surely wasn’t the guy who is currently in the room using them.
That is the real problem with Searle’s argument. The person in the Chinese room is simply irrelevant. In following the instructions, he makes no free-will choices of his own. Any impression of Chinese prowess, for the observer, comes from the instructions. Therefore the only entity whose intelligence we can argue about is whoever made these instructions — and that entity is outside this thought experiment. The difference between a set of rules, and that same set of rules plus something that does nothing but follow them, is immaterial.
Searle’s argument is like claiming that a phone isn’t conscious when it relays someone’s intelligent responses to your questions. Surely it isn’t, but why should that concern us? The phone is not who you’re talking to, even if it’s necessary for the conversation to happen.
Searle’s thought experiment entirely misses the point it’s trying to (dis)prove. Which is perhaps unsurprising, given how badly its model — a live-but-dumb processor plus a dead-but-intelligent memory — reflects what’s really going on inside the entities that, by common consent, are intelligent or speak Chinese. The Chinese room resembles a digital computer, with its CPU (which does the number crunching) and RAM (which stores programs and data) — but it’s very unlike a real brain, where, for all we know, the same neural tissue works as memory and processor at the same time.