
The Failures of Computationalism

John R. Searle

The Power in the Chinese Room

Harnad sees the force of the Chinese Room Argument but is reluctant to carry it through to its logical conclusion. In this commentary I want to follow the argument through to the end.

The Chinese Room shows that a system, me for example, could pass the Turing Test for understanding Chinese and could implement any program you like and still not understand a word of Chinese. Now, why? What does the genuine Chinese speaker have that I in the Chinese room do not have? The answer is obvious. I in the Chinese room am manipulating a bunch of symbols, but the Chinese speaker has more than formal symbols: he knows what they mean. That is, in addition to the syntax of Chinese, the genuine Chinese speaker has a semantics in the form of meaning, understanding, and mental contents generally.

But once again, why? Why can't I in the Chinese room also have a semantics? Because all I have is a program and a bunch of symbols, and programs are defined syntactically in terms of the manipulation of symbols. The Chinese room shows what we knew all along: syntax by itself is not sufficient for semantics. (Does anyone actually deny this point, I mean straight out? Is anyone actually willing to say, straight out, that they think that syntax, in the sense of formal symbols, is really the same as semantic content, in the sense of meanings, thought contents, understanding, etc.?)
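To see how thin a purely syntactic specification is, consider a toy sketch (the particular symbols and rules below are merely illustrative assumptions of mine, not anything drawn from Harnad or from any actual system): a program that maps input symbol strings to output symbol strings by matching their shapes against a stored table. Nothing in the program, at any step, consults what any symbol means.

    # A toy "rulebook": purely formal mappings from input symbol strings to
    # output symbol strings. The symbols and rules are illustrative only.
    RULEBOOK = {
        "你好吗": "我很好，谢谢",        # shapes matched, reply emitted
        "你叫什么名字": "我叫王先生",    # no step consults meaning
    }

    def chinese_room(input_symbols: str) -> str:
        # Pure syntax: compare shapes, look up, emit.
        return RULEBOOK.get(input_symbols, "请再说一遍")

    print(chinese_room("你好吗"))        # a fluent-looking reply, no understanding

Replacing the table with rules of arbitrary sophistication makes no difference to the point: the operations remain defined over the form of the symbols alone.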

So far Harnad and I are in agreement. Now let's go the next step. Why did the old-time computationalists make such an obvious mistake? Part of the answer is that they were confusing epistemology with ontology; they were confusing ``How do we know?'' with ``What is it that we know when we know?'' This mistake is enshrined in the Turing Test (TT) (and, as we shall see, it is also enshrined in Harnad's Total Turing Test (TTT)). They thought that because the system could behave as if it understood Chinese, and because there was an implemented program that mediated the input and the output, somehow that combination, program plus behavior, must be constitutive of cognition. But now, as a consequence of the Chinese Room Argument, we know that that is a mistake. In general we find out that a person understands Chinese by observing his behavior, and if we had an artificial system that behaved the same way as the person, and we had an explanation, such as the program provides, it would seem not unreasonable to conclude that the system also understands Chinese. But we now know that program plus behavior cannot be constitutive of understanding. We know that the program is just symbol manipulation and that where the ontology --- as opposed to the epistemology --- of the mind is concerned, behavior is, roughly speaking, irrelevant.

Well then, if a formal program implemented in a brain or a commercial computer is not constitutive of understanding, what is constitutive of understanding, and of cognition in general? If the program is not enough, what is? And if the relation of mind to brain is not that of software to hardware, what is the relation?

In response to such questions there is no substitute for reminding ourselves of what we already know to be the case. We know this: we all really do have mental states and processes, some conscious, some unconscious. These are intrinsic to us in the sense that we really do literally have them, and their existence is not a matter of how other people treat us or think of us. Many of them have intentional contents. Furthermore, all of our cognitive processes, without exception, are caused by neurobiological processes in the brain, and these processes in turn are often, though not always, triggered by external stimuli. Lower-level processes in the brain cause perceptual experiences, the understanding of sentences, logical inferences, and all the rest of it. (Again, does anyone deny this?)

Let us ponder some consequences of this point. Its most immediate consequence is that any system which actually had cognition would have to have internal causal powers equivalent to those of the brain. These causal powers might be achieved in some other medium. Indeed there is no logical impossibility in achieving those powers in some totally ridiculous medium --- beer cans, Cartesian souls, even silicon chips --- but in real life we know that the biology of cognition is likely to be as biochemically limited as, say, the biology of digestion. There are lots of actual and possible forms of digestion, but not just anything will work; and similarly with cognition. In any case, whatever the chemistry of an artificial brain, we know that to succeed it must duplicate --- and not merely simulate or model --- the causal powers of the real brain. (Compare: airplanes do not have to have feathers in order to fly, but they do have to duplicate the causal power of birds to overcome the force of gravity in the earth's atmosphere.)

So in creating an artificial brain we have two problems: first, anything that does the job has to duplicate and not merely simulate the actual causal powers of real brains; and second, syntax is not enough to do the job.

Does Harnad's robot that can pass the TTT evade or overcome these problems? I cannot see that it even touches the problems, and I am puzzled that he thinks it does, because his argument looks like a variant of the robot reply that I answered in my original target article in BBS (Searle, 1980). Here is how the argument goes:

Harnad argues that a robot that could pass the TTT in virtue of sensory and motor transducers would not merely be interpretable as having appropriate mental states but would actually have such mental states. But imagine a really big robot whose brain consists of a commercial computer located in a Chinese room in the robot's cranium. Now replace the commercial computer with me, Searle. I am now the robot's brain, carrying out the steps in the robot's programs. The robot has all of the sensory and motor transducers it needs to coordinate its input with its output. And I, Searle, in the Chinese room am doing the coordinating, but I know nothing of this. For example, among the robot's transducers, we will suppose, are devices that convert optical stimuli into Chinese symbols. These Chinese symbols are input to the Chinese room, and I operate on these symbols according to a set of rules, the program. I operate on these symbols and eventually send symbols to transducers that cause motor output. The motor output is an utterance that says in Chinese, ``I just saw a big fat Buddha''. But all the same, I didn't see anything, and neither did the robot. That is, there were no conscious visual experiences of the Buddha in question. What actually occurred was that light-sensitive detectors in the robot's skull took in stimuli and converted these into symbols; I processed the symbols and sent them to output transducers that converted the symbolic output into an auditory utterance. In such a case you can have all of the transducers you want and pass the TTT until the sun goes down, but you still do not thereby guarantee the appropriate experience, understanding, etc.
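The structure of the case can be put schematically (the functions and values below are placeholder assumptions of mine, not a description of any actual robot): transduction at the input, symbol manipulation in the middle, transduction at the output.

    # Schematic of the robot case: transducers at either end, pure symbol
    # manipulation in between. All names and values are illustrative placeholders.
    def optical_transducer(light_intensities):
        # Converts physical stimuli into formal symbols (a Chinese string).
        return "大胖佛" if sum(light_intensities) > 100 else "无"

    def rule_follower(symbols):
        # The man in the room: matches symbol shapes against his rules.
        return "我刚看见一个大胖佛" if symbols == "大胖佛" else "我什么也没看见"

    def speech_transducer(symbols):
        # Converts output symbols into (a stand-in for) an auditory utterance.
        return "UTTER: " + symbols

    # Input and output are causally connected end to end, yet every step in
    # between operates only on the shapes of the symbols, never on their meaning.
    print(speech_transducer(rule_follower(optical_transducer([60, 70, 80]))))

However elaborate the transducers at either end, what happens in the middle is still only the manipulation of symbols.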

Harnad thinks it is an answer to this to suppose that the transducers have to be part of me and not just an appendage. Unless they are part of me, he says, I am ``not implementing the whole system, only part of it.'' Well, fine; let us suppose that I am totally blind because of damage to my visual cortex, but my photoreceptor cells work perfectly as transducers. Then let the robot use my photoreceptors as transducers in the above example. What difference does it make? None at all, as far as getting the causal powers of the brain to produce vision is concerned. I would still be blindly producing the input-output functions of vision without seeing anything.

Will Harnad insist in the face of this point that I am still not implementing the whole system? If he does, then the thesis threatens to become trivial: any system identical with me has cognition because of my neurobiological constitution. Anything identical with my system can cause what my system causes, and all the talk about the TTT, computation, and the rest of it would now become irrelevant.

The moral can be stated generally. Syntax is not enough to guarantee mental content, and syntax that is the output of transducers is still just syntax. The transducers don't add anything to the syntax which would in any way duplicate the quite specific causal powers of the brain to produce such mental phenomena as conscious visual experiences.

I think the deep difference between Harnad and me comes when he says that in cases where I am not implementing the whole system, then, ``as in the Chinese gym, the System Reply would be correct.'' But he does not tell us how it could possibly be correct. According to the System Reply, though I in the Chinese Room do not understand Chinese or have visual experiences, the whole system understands Chinese, has visual experiences, etc. But the decisive objection to the System Reply is one I made in 1980: If I in the Chinese Room don't have any way to get from the syntax to the semantics then neither does the whole room; and this is because the room hasn't got any additional way of duplicating the specific causal powers of the Chinese brain that I do not have. And what goes for the room goes for the robot.

In order to justify the System Reply one would have to show that the system as a whole has some way of getting from the syntax to the semantics that I do not have. And in order to show that, one would have to show that the system has some additional means of duplicating the specific causal powers of the brain, over and above anything I have. Until these two conditions are met, the System Reply is just hand waving.

The moral is that the mistake of the TTT is exactly the same as the mistake of the TT: it invites us to confuse epistemology with ontology. Just as a system can pass the Turing Test without having any appropriate cognition, so a system can pass the Total Turing Test and still not have the appropriate cognition. Behavior plus syntax is not constitutive of cognition, and for the same reason transduction plus syntax is not constitutive of cognition. To repeat, where the ontology --- as opposed to the epistemology --- of the mind is concerned, behavior is irrelevant.

Connectionism to the Rescue?

Well, what about connectionism? Will connectionism solve our problems? It all depends on which features of which nets are under discussion and what claims are being made. If the claim is that we can simulate, though not duplicate, some interesting properties of brains on connectionist nets, then there could be no Chinese Room style of objections. Such a claim would be a connectionist version of weak AI. But what about a connectionist Strong AI? Can you build a net that actually has, and does not merely simulate, cognition?

This is not the place for a full discussion, but briefly: If you build a net that is molecule for molecule indistinguishable from the net in my skull, then you will have duplicated and not merely simulated a human brain. But if a net is identified purely in terms of its computational properties then we know from familiar results that any such properties can be duplicated by a Universal Turing machine. And Strong AI claims for such computations would be subject to Chinese Room style refutation.

For purposes of the present discussion, the crucial question is: in virtue of what does the notion ``same connectionist net'' identify an equivalence class? If it is in virtue of computational properties alone, then a Strong AI version of connectionism is still subject to the Chinese Room Argument, as Harnad's example of the three rooms illustrates nicely. But if the equivalence class is identified in terms of some electrochemical features of physical architectures, then it becomes an empirical question, one for neurobiology to settle, whether the specific architectural features are such as to duplicate and not merely simulate the actual causal powers of actual human brains. But, of course, at present we are a long way from having any nets where such questions could even be in the realm of possibility. The characteristic mistake in the literature --- at least such literature as I am familiar with --- is to insinuate that the connectionist style of computation will somehow get us close to the causal powers of actual systems of neurons. For example, the computations are done in systems that are massively parallel and so operate at several different physical locations simultaneously. The computation is distributed over the whole net and is achieved by summing input signals at nodes according to connection strengths, etc. Now will these and other such neuronally inspired features give us an equivalence class that duplicates the causal powers of actual human neuronal systems? As a claim in neurobiology the idea seems quite out of the question, as you can see if you imagine the same net implemented in the Chinese gym. Unlike the human brain, there is nothing in the gym that could either constitute or cause cognition.
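The point can be made vivid with a toy example (the weights and sizes below are arbitrary assumptions, chosen only for illustration): the ``massively parallel'' summing of weighted inputs at the nodes of a net is, described computationally, just ordinary arithmetic over symbols, and it can be carried out one node at a time on a serial machine, or by the men passing slips of paper in the Chinese gym.

    # A toy connectionist layer described purely computationally: summing
    # weighted inputs at each node and thresholding. Weights and sizes are
    # arbitrary; the point is only that the computation is ordinary,
    # serializable symbol manipulation.
    weights = [[0.5, -0.2, 0.1],
               [0.3,  0.8, -0.6]]          # two nodes, three connections each
    inputs = [1.0, 0.0, 1.0]

    outputs = []
    for node_weights in weights:           # one node at a time: nothing here
        total = 0.0                        # requires parallel hardware
        for w, x in zip(node_weights, inputs):
            total += w * x                 # summing inputs by connection strength
        outputs.append(1 if total > 0 else 0)

    print(outputs)                         # the same steps the gym could do by hand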

Harnad, by the way, misses the point of the Chinese gym. He thinks it is supposed to answer the System Reply. But that is not the point at all.

The dilemma for Strong AI Connectionism can be stated succinctly: If we define the nets in terms of their computational properties, they are subject to the usual objection. Computation is defined syntactically, and syntax by itself is not sufficient for mental contents. If we define the nets in terms of physical features of their architecture, then we have left the realm of computation and are now doing speculative neurobiology. Existing nets are nowhere near having the causally relevant neurobiological properties. (Again, does anyone really doubt this?)

