Chinese room: what does it disprove?

The Chinese room is a famous thought experiment by John Searle which purports to show that a machine, no matter how complex, cannot be conscious. It replaces the machine with an ordinary conscious person who, however, is ignorant of some domain of knowledge, such as the Chinese language. Still, this person (hidden in a closed room) can communicate in perfect Chinese with anyone on any topic, simply by following a (supposedly large and complex) set of rules that “correlate one set of formal symbols with another set of formal symbols” (i.e., English with Chinese), created for him by someone else. According to Searle, this paradox (you may not know a single word of Chinese and yet pass for a fluent Chinese speaker by adhering to a set of rules) proves that even if a machine appears conscious and passes the Turing test with flying colors, it is still not really conscious, any more than the person in the room really knows Chinese.
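To make the mechanism concrete, here is a toy sketch in Python. It is my own illustration, not Searle’s construction: the rule book is reduced to a hypothetical lookup table, whereas Searle’s rules could be arbitrarily elaborate. But the essential point, purely formal symbol manipulation with no understanding, is the same.

```python
# A toy "Chinese room": the operator applies purely formal rules,
# mapping input symbols to output symbols, without interpreting
# either side. The rule book here is a hypothetical lookup table;
# Searle's actual rules could be arbitrarily large and complex.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",    # "Do you speak Chinese?" -> "Of course."
}

def operator(symbols: str) -> str:
    """Follow the rules mechanically; never interpret the symbols."""
    return RULE_BOOK.get(symbols, "请再说一遍。")  # "Please say that again."

print(operator("你好吗？"))  # a fluent-looking reply, zero understanding
```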

One immediate problem with this argument is that speaking Chinese, difficult as it may be, and appearing conscious and intelligent are tasks that aren’t quite analogous. To know Chinese, you just need to know Chinese; to appear minimally intelligent, you have to know a lot more about a lot more things. It may be argued that, for all its complexity, any language is ultimately a formal system that can be described by rules; for intelligence, this is much less clear. However, I’m not going to pursue this line of attack — I have something better.

There’s another problem: at best, this argument proves that a seemingly conscious machine can, but not necessarily must, be non-conscious. If it targets AI proponents as religious believers, all it can do is turn them into agnostics, not true atheists. If “just following some rules” is Searle’s definition of “not being really conscious,” then he must first show that we conscious humans are not, ultimately, just following rules ourselves. This argument certainly doesn’t accomplish that. “I feel that I am not a machine” can easily be a delusion: the man in the Chinese room, if he has never seen a real Chinese speaker, may well think that what he does is speaking Chinese, and that all the Chinese speakers he’s communicating with are likewise English-speaking people sitting in their own rooms with similar sets of rules. To me, this is a serious flaw of the argument, but it’s not what dooms it.

Searle purports to demonstrate how what appears on the outside (someone can speak Chinese, an AI is conscious) may contradict what really is (cannot and is not). But even if we accept Searle’s propositions about what is and what appears, the argument still fails, because the two facts have no common subject: they apply to different entities. To an outside observer, it’s not “the person in the room” who speaks Chinese; it is the room as a whole, with all its rule books and vocabularies. And whoever authored all those ingenious books, it surely wasn’t the guy currently sitting in the room using them.

That is the real problem with Searle’s argument. The person in the Chinese room is simply irrelevant. In following the instructions, he makes no free choices of his own. Any impression of Chinese prowess, for the observer, comes entirely from the instructions. Therefore the only entity whose intelligence we can argue about is whoever made those instructions, and that entity is outside the thought experiment. The difference between a set of rules, and that same set of rules plus something that does nothing but follow them, is immaterial.

Searle’s argument is like claiming that a phone isn’t conscious when it relays someone’s intelligent responses to your questions. Surely it isn’t, but why should that concern us? The phone is not who you’re talking to, even if it’s necessary for the conversation to happen.

Searle’s thought experiment totally misses the point it’s trying to (dis)prove. Which is perhaps unsurprising, given how badly its model (a live-but-dumb processor plus a dead-but-intelligent memory) reflects what’s really going on inside the entities that, by common consent, are intelligent or speak Chinese. The Chinese room resembles a digital computer, with its CPU that does the number crunching and its RAM that stores programs and data; but it’s very unlike a real brain where, for all we know, the same neural tissue serves as memory and processor at the same time.
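Here is a rough sketch of that architectural contrast, again my own illustration with made-up names and values: a lookup table plus a know-nothing executor on one side, and a single sigmoid neuron standing in for “neural tissue” on the other.

```python
import math

# Room / von Neumann style: knowledge sits in passive storage (the rule
# book, the "RAM"), and a separate know-nothing executor (the "CPU",
# the person in the room) merely applies it.
rule_book = {"ping": "pong"}          # hypothetical stored rules

def executor(message):
    return rule_book.get(message)     # mechanical lookup, no understanding

# Neural style: the same parameters both store the knowledge and do the
# computing; there is no separate rule book to point at.
weights = [0.7, -1.2]                 # hypothetical learned values

def neuron(x1, x2):
    s = weights[0] * x1 + weights[1] * x2   # the weights are the memory...
    return 1 / (1 + math.exp(-s))           # ...and the processing step

print(executor("ping"))   # "pong": the answer was stored as-is
print(neuron(1.0, 0.0))   # ~0.668: the answer is computed by the weights
```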


2 thoughts on “Chinese room: what does it disprove?”

  1. Good points. The computer is like the room. Somebody on the outside built in the rules that operate on the inputs. But that somebody, too, could have been another computer. The question is: can we ever prove that the regress has to stop somewhere, at some genuine consciousness? If you are a dualist like me, the answer would be yes. But if you reject any dualism, then the answer is no.

    • I think the regress does need to stop somewhere, because the world is finite (for practical purposes at least). But, more controversially, I don’t think it necessarily needs to stop at a “genuine consciousness” as opposed to non-genuine intermediaries. Rather, to me consciousness and intelligence (and free will, for that matter) are as subjective as ethics (“no ought from is”), and therefore to a large degree nebulous and relative. The Turing test works because only other intelligent entities can judge an entity’s intelligence (and different Turing testers may well disagree, and never settle their differences); it’s not something you can objectively measure.

      I think there may well be some _added_ intelligence in a book or computer that isn’t there in its creator. So, by tracing the chain of books or computers, we may discover that our impression of intelligence is in fact something that accumulates along this chain, with each link adding a bit, and the origin of the chain being not very intelligent, if intelligent at all. Of course, this is largely speculation; in the world we live in at the moment, it’s quite easy to classify entities into intelligent and non-intelligent, and to trace where (from whom) some bit of intelligence ultimately comes. But as computers grow more complex and powerful, I think this will become more and more difficult, with the boundaries blurring. Even now, simple algorithms sometimes act in an uncannily human-like manner; my point is that this is not an “illusion”: it is indeed very small and limited bits of intelligence of the same kind that we ourselves possess. It’s all a matter of degrees and scales as to what counts as “genuine.”
