In 1980, the philosopher John Searle published, in the journal Behavioral and Brain Sciences, a simple thought experiment that he called the "Chinese Room Argument" against "Strong Artificial Intelligence (AI)." The thesis of Strong AI has since come to be called "computationalism," according to which cognition is just computation, hence mental states are just computational states:
Computationalism. According to computationalism, to explain how the mind works, cognitive science needs to find out what the right computations are -- the ones that the brain performs in order to generate the mind and its capacities. Once we know that, then every system that performs those computations will have those mental states: Every computer that runs the mind's program will have a mind, because computation is hardware-independent: Any hardware that is running the right program has the right computational states.
The Turing Test. How do we know which program is the right program? Although it is not strictly a tenet of computationalism, an answer that many computationalists will agree to is that the right program will be the one that can pass the Turing Test (TT) -- that is, the one that can interact by email with real people exactly the way real people do, so exactly that no person can ever tell that the computer program is not another real person. Turing (1950) had suggested that once a computer can do everything a real person can do, so well that we cannot even tell them apart, it would be arbitrary to deny that that computer has a mind, that it is intelligent, that it can understand just as a real person can.
This, then, is the thesis that Searle set out to show was wrong: (1) Mental states are just computational states, (2) the right computational states are the ones that can pass the TT, and (3) any and every hardware on which you run those computational states will have those mental states too.
Hardware-Independence. Searle’s thought experiment was extremely simple. Normally, there is no way I can tell whether anyone or anything other than myself has mental states. The only mental states we can be sure about are our own. We can’t be someone else, to check whether they have mental states too. But computationalism has an important vulnerability in this regard: hardware-independence. Since any and every dynamical system (i.e., any physical hardware) that is executing the right computer program would have to have the right mental states, Searle himself can execute the computer program, thereby himself becoming the hardware, and then check whether he has the right mental states. In particular, Searle asks whether the computer that passes the TT really understands the emails it is receiving and sending.
The Chinese Room. To test this, Searle obviously cannot conduct the TT in English, for he already understands English. So in his thought-experiment the TT is conducted in Chinese: The (hypothetical) computer program he is testing in his thought-experiment is able to pass the TT in Chinese. That means it is able to receive and send email in Chinese in such a way that none of its (real) Chinese pen-pals would ever suspect that they were not communicating with a real Chinese-speaking and Chinese-understanding person. (We are to imagine the email exchanges going on as frequently as we like, with as many people as we like, for as long as we like, even for an entire lifetime. The TT is not just a short-term trick.)
Symbol-Manipulation. In the original version of Searle’s Chinese Room Argument he imagined himself in the Chinese Room, receiving the Chinese emails (a long string of Chinese symbols, completely unintelligible to Searle). He would then consult the TT-passing computer program, in the form of rules written (in English) on the wall of the room, explaining to Searle exactly how he should manipulate the symbols, based on the incoming email, to generate the outgoing email. It is important to understand that computation is just rule-based symbol-manipulation, and that the manipulation and matching is done purely on the basis of the shape of the symbols, not on the basis of their meaning.
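What "rule-based symbol-manipulation" means can be made concrete with a minimal sketch. The rulebook below is invented for illustration (Searle's TT-passing program is purely hypothetical); the point is that the rules match and replace symbol strings purely by their shape, and nothing in the process ever touches their meaning.

```python
# Toy sketch of shape-based symbol-manipulation (not Searle's actual rules,
# which are hypothetical). The "rulebook" pairs incoming symbol shapes with
# outgoing symbol shapes; matching is literal string comparison.
RULEBOOK = {
    "你好吗": "我很好，谢谢",        # ("How are you?" -> "I'm fine, thanks" --
    "你叫什么名字": "我叫王立",      # but the program never "knows" this)
}

def chinese_room(incoming: str) -> str:
    """Apply the rulebook by shape-matching alone; no access to meaning."""
    for shape, reply in RULEBOOK.items():
        if incoming == shape:        # pure comparison of symbol shapes
            return reply
    return "对不起"                  # default symbol string ("sorry")

print(chinese_room("你好吗"))        # prints 我很好，谢谢
```

Whether the rules are executed by a computer, or by Searle reading them off the wall, the input-output behavior is identical: that is the hardware-independence the argument exploits.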
Now the gist of Searle’s argument is very simple: In doing all that, he would be doing exactly the same thing any other piece of hardware executing that TT-passing program was doing: rule-fully manipulating the input symbols on the basis of their shapes, and generating output symbols that make sense to a Chinese pen-pal -- the kind of email reply a real pen-pal would send, a pen-pal that had understood the email received, as well as the email sent.
Understanding. But Searle goes on to point out that in executing the program he himself would not be understanding the emails at all! He would just be manipulating meaningless symbols, on the basis of their shapes, according to the rules on the wall. Therefore, because of the hardware-independence of computation, if Searle would not be understanding Chinese under those conditions, neither would any other piece of hardware executing that Chinese TT-passing program. So much for computationalism and the theory that cognition is just computation.
The System Reply. Searle correctly anticipated that his computationalist critics would not be happy with the handwriting on the wall: Their “System Reply” would be that Searle was only part of the TT-passing system. That whereas Searle would not be understanding Chinese under those conditions, the system as a whole would be!
Searle rightly replied that he found it hard to believe that he plus the walls together could constitute a mental state, but, playing the game, he added: Then forget about the walls and the room. Imagine that I have memorized all the symbol manipulation rules and can conduct them from memory. Then the whole system is me: Where’s the understanding?
Desperate computationalists were still ready to argue that somewhere in there, inside Searle, under those conditions, there would lurk an understanding of Chinese of which Searle himself was unaware, as in multiple personality disorder -- but this seems even more far-fetched than the idea that a person plus walls has a joint mental state of which the person is unaware.
Brain Power. So the Chinese Room Argument is right, such as it is, and computationalism is wrong. But if cognition is not just computation, what is it then? Here Searle is not much help, for he first overstates what his argument has shown, concluding that it has shown (i) that cognition is not computation at all – whereas all it has shown is that cognition is not all computation. Searle also concludes that his argument has shown (ii) that the Turing Test is invalid, whereas all it has shown is that the TT would be invalid if it could be passed by a purely computational system. His only positive recommendation is to turn brain-ward, trying to understand the causal powers of the brain instead of the computational powers of computers.
But it is not yet apparent what the relevant causal powers of the brain are, nor how to discover them. The TT itself is a potential guide: Surely the relevant causal power of the brain is its power to pass the TT! We know now (thanks to the Chinese Room Argument) that if a system could pass the TT via computation alone, that would not be enough. What would be missing?
The Robot Reply. One of the attempted refutations of the Chinese Room Argument – the “Robot Reply” – contained the seeds of an answer, but they were sown in the wrong soil. A robot’s sensors and effectors were invoked in order to strengthen the System Reply: It is not Searle plus the walls of the Chinese Room that constitutes the Chinese-understanding “system”, it is Searle plus a robot’s sensors and effectors. Searle rightly points out that it would still be him doing all the computations, and it was the computations that were on trial in the Chinese Room. But perhaps the TT itself needs to be looked at more closely here:
Behavioral Capacity. Turing’s original Test was indeed the email version of the TT. But there is nothing in Turing’s paper or his arguments on behalf of the TT to suggest that it should be restricted to candidates that are just computers, or even that it should be restricted to email! The power of the TT is the argument that if the candidate can do everything a real person can do – and do it indistinguishably from the way real people do it, as judged by real people – then it would be mere prejudice to conclude that it lacked mental states when we were told it was a machine. We don’t even really know what a machine is, or isn’t!
But we do know that real people can do a lot more than just email to one another. They can see, touch, name, manipulate and describe most of the things they talk about in their email. Indeed, it is hard to imagine how either a real pen-pal or any designer of a TT-passing computer program could deal intelligibly with all the symbols in an email message without also being able to do at least some of the things we can all do with the objects and events in the world that those symbols stand for.
Sensorimotor Grounding of Symbols. Computation, as noted, is symbol-manipulation, by rules based on the symbols’ shapes, not their meanings. Computation, like language itself, is universal, and perhaps all-powerful (in that it can encode just about anything). But surely if we want the ability to understand the symbols’ meanings to be among the mental states of the TT-passing system, this calls for more than just the symbols and the ability to manipulate them. Some, at least, of those symbols must be “grounded” in something other than just more meaningless symbols and symbol-manipulations – otherwise the system is in the same situation as someone trying to look up the meaning of a word in a language – let’s say, Chinese -- that he does not understand… in a Chinese-Chinese dictionary! Emailing the definitions of the words would be intelligible enough to a pen-pal who already understood Chinese, but they would be of no use to anyone or anything that did not understand Chinese. Some of the symbols must be grounded in the capacity to recognize and manipulate the things in the world that the symbols refer to.
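The Chinese-Chinese dictionary regress can also be sketched in a few lines. The entries below are invented placeholders, not a real dictionary; the point is that when every symbol is defined only in terms of other symbols in the same dictionary, lookup never bottoms out in anything but more symbols.

```python
# Toy sketch of the dictionary-go-round: every word's definition consists
# only of other words in the same dictionary (invented entries, for
# illustration only).
DICTIONARY = {
    "zebra": ["horse", "stripes"],
    "horse": ["animal", "mane"],
    "stripes": ["bands", "color"],
    "animal": ["organism"],
    "mane": ["hair", "animal"],
    "bands": ["stripes"],
    "color": ["bands"],
    "organism": ["animal"],
    "hair": ["mane"],
}

def chase_definitions(word, steps=6):
    """Follow definitions; all we ever reach is more dictionary symbols."""
    trail = [word]
    for _ in range(steps):
        word = DICTIONARY[word][0]   # take the first defining symbol
        trail.append(word)
    return trail

print(chase_definitions("zebra"))
# Every item on the trail is just another symbol in the same dictionary:
# nothing connects any of them to actual zebras, horses, or stripes.
```

For someone who already understands some of the words, the definitions are informative; for a system with nothing but the dictionary, the chase cycles through meaningless shapes forever. Grounding means breaking out of this circle by connecting at least some symbols to sensorimotor interaction with their referents.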
Mind-Reading. So the TT candidate must be a robot, able to interact with the world that the symbols are about -- including us -- directly, not just via email. And it must be able to do so indistinguishably from the way any of the rest of us interact with the world or with one another. That is the gist of the TT. The reason Turing originally formulated his test in its pen-pal form was so that we would not be biased by the candidate’s appearance. But in today’s cinematic sci-fi world we have, if anything, been primed to be over-credulous about robots, so much more “capable” are our familiar fictional on-screen cyborgs than any TT candidate yet designed in a cog-sci lab. In real life our subtle and biologically based “mind-reading” skills (Frith & Frith 1999) will be all we need once cog-sci starts to catch up with sci-fi and we can begin T-Testing in earnest.
The Other-Minds Problem. Could the Chinese Room Argument be resurrected to debunk a TT-passing robot? Certainly not. For Searle’s argument depended crucially on the hardware-independence of computation. That was what allowed Searle to “become” the candidate and then report back to us (truthfully) that we were mistaken if we thought he understood Chinese. But we cannot “become” the TT-passing robot, to check whether it really understands, any more than we can become another person. It is this parity (between other people and other robots) that is at the heart of the TT. And anyone who thinks this is not an exacting enough test of having a mind need only remind himself that the Blind Watchmaker (Darwinian evolution), our “natural designer,” is no more capable of mind-reading than any of the rest of us is. That leaves only the robot to know for sure whether or not it really understands.
Anderson, D. & Copeland, B.J. (2002) Artificial Life and the Chinese Room Argument. Artificial Life 8(4): 371-378.
Brown, S. (2002) Peirce, Searle, and the Chinese Room Argument. Cybernetics & Human Knowing 9(1): 23-38.
Dyer, M. G. (1990) Intentionality and computationalism: minds, machines, Searle and Harnad. Journal of Experimental and Theoretical Artificial Intelligence 2(4): 303-319.
French, R. (2000) The Turing Test: The First Fifty Years. Trends in Cognitive Sciences 4(3): 115-121.
Frith, Christopher D. & Frith, Uta (1999) Interacting minds -- a biological basis. Science 286: 1692-1695. http://pubpages.unh.edu/~jel/seminar/Frith_mind.pdf
Harnad, Stevan (1989) Minds, Machines and Searle. Journal of Theoretical and Experimental Artificial Intelligence 1: 5-25. http://cogprints.org/1573/
Harnad, Stevan (1990) The Symbol Grounding Problem. Physica D 42: 335-346. http://cogprints.org/3106/
Harnad, Stevan (2001) What's Wrong and Right About Searle's Chinese Room Argument? In: M. Bishop & J. Preston (eds.) Essays on Searle's Chinese Room Argument. Oxford University Press. http://cogprints.org/4023/
Harnad, Stevan (2001) On Searle On the Failures of Computationalism. Psycoloquy 12(61): Symbolism Connectionism (28).
Harnad, Stevan (2003) Can a machine be conscious? How? Journal of Consciousness Studies 10(4-5): 69-75.
Harnad, Stevan (2003) Symbol-Grounding Problem. Encyclopedia of Cognitive Science. Nature Publishing Group/Macmillan. http://cogprints.org/3018/
Harnad, Stevan (2004) The Annotation Game: On Turing (1950) on Computing, Machinery and Intelligence. In: Epstein, Robert & Peters, Grace (eds.) The Turing Test Sourcebook: Philosophical and Methodological Issues in the Quest for the Thinking Computer. Kluwer. http://cogprints.org/3322/
Overill, R.E. (2004) Views into the Chinese Room: New Essays on Searle and Artificial Intelligence. Journal of Logic and Computation 14(2): 325-326.
Pylyshyn, Z. W. (1984) Computation and cognition. Cambridge, MA: MIT/Bradford.
Searle, John R. (1980) Minds, brains, and programs. Behavioral and Brain Sciences 3(3): 417-457. http://www.bbsonline.org/documents/a/00/00/04/84/bbs00000484-00/bbs.searle2.html
Searle, John R. (1984) Minds, brains, and science. Cambridge, MA: Harvard University Press.
Searle, John R. (1987) Minds and Brains without Programs. In: C. Blakemore & S. Greenfield (eds.) Mindwaves. Oxford: Basil Blackwell.
Searle, John R. (1990) Explanatory inversion and cognitive science. Behavioral and Brain Sciences 13: 585-595.
Searle, John R. (1990) Is the Brain's Mind a Computer Program? Scientific American, January 1990.
Searle, John R. (2001) The Failures of Computationalism. Psycoloquy 12(62): Symbolism Connectionism (29).
Souder, L. (2003) What Are We to Think about Thought Experiments? Argumentation 17(2): 203-217.
Turing, A. M. (1950) Computing Machinery and Intelligence. Mind 59: 433-460. http://cogprints.org/499/
Wakefield, J.C. (2003) The Chinese Room Argument Reconsidered: Essentialism, Indeterminacy, and Strong AI. Minds and Machines 13(2): 285-319.