Instituut voor Taal- en Kennistechnologie
Institute for Language Technology and Artificial Intelligence

Special section on `Symbolism versus Connectionism'

David M.W. Powers and Peter A. Flach, guest-editors

There is a continuing debate in the AI community about the relative merits of symbolic learning and connectionist learning. While working on the THINK special section on Machine Learning (THINK, Vol. 1, No. 2, 1992), we wanted to include contributions representing several alternative viewpoints on this subject. In order to retain some of the liveliness of an actual debate, we invited a number of authors to contribute by means of position papers and commentaries rather than isolated technical papers. Pleasantly surprised by the enthusiastic response to this `Call for debate' (1 position paper and 13 commentaries), and by the broad scope of the issues raised, we decided to devote an entire special section, entitled `Symbolism versus Connectionism', to this discussion.

We are delighted to introduce Stevan Harnad, the author of the position paper `Grounding symbols in the analog world with neural nets: a hybrid model'. As the founding editor of the journal Behavioral and Brain Sciences, from which the idea of a position paper plus commentaries was borrowed, Stevan gave us many useful suggestions on how to organise things. Harnad's main argument can be roughly summarised as follows: by Searle's Chinese Room argument, symbol systems by themselves are insufficient to exhibit cognition, because their symbols are not grounded in the real world and are therefore without meaning. However, a symbol system that is connected to the real world through transducers receiving sensory data, with neural nets translating these data into sensory categories, would not be subject to the Chinese Room argument.

Harnad's article is not only the starting point for the present debate, but also a contribution to a long-standing discussion about such questions as: Can a computer think? If so, would this be solely by virtue of its program? Is the Turing Test appropriate for deciding whether a computer thinks? Since about half of the commentaries (Bringsjord, Dietrich, Fetzer, Hayes, McDermott, Searle) should also be read in the context of this ongoing discussion, we will summarise the main points made so far before letting the authors speak for themselves.

The Chinese Room argument originates with John Searle, and we can do no better than to quote him:

``Imagine that a bunch of computer programmers have written a program that will enable a computer to simulate the understanding of Chinese. So, for example, if the computer is given a question in Chinese, it will match the question against its memory, or data base, and produce appropriate answers to the questions in Chinese. Suppose for the sake of argument that the computer's answers are as good as those of a native Chinese speaker. Now then, does the computer, on the basis of this, understand Chinese (...)? Well, imagine that you are locked in a room, and in this room are several baskets full of Chinese symbols. Imagine that you (like me) do not understand a word of Chinese, but that you are given a rule book in English for manipulating these Chinese symbols. The rules specify the manipulations of the symbols purely formally, in terms of their syntax, not their semantics. So the rule might say: `Take a squiggle-squiggle sign out of basket number one and put it next to a squoggle-squoggle sign from basket number two.' (...) By virtue of implementing a formal computer program from the point of view of an outside observer, you behave exactly as if you understood Chinese, but all the same you don't understand a word of Chinese.'' (Searle, 1984, pp.32--33)
Since the Chinese Room argument is often misunderstood, it is important to state explicitly what Searle claims here.

``First, I have not tried to prove that `a computer cannot think'. Since anything that can be simulated computationally can be described as a computer, and since our brains can at some levels be simulated, it follows trivially that our brains are computers and they can certainly think. But from the fact that a system can be simulated by symbol manipulation and the fact that it is thinking, it does not follow that thinking is equivalent to formal symbol manipulation. Second, I have not tried to show that only biologically based systems like our brains can think. Right now, those are the only systems we know for a fact can think, but we might find other systems in the universe that can produce conscious thoughts, and we might even come to be able to create thinking systems artificially. I regard this issue as up for grabs. Third, strong AI's thesis is not that, for all we know, computers with the right programs might be thinking, that they might have some as yet undetected psychological properties; rather it is that they must be thinking because that is all there is to thinking. Fourth, (...) I have tried to demonstrate that the program by itself is not constitutive of thinking because the program is purely a matter of formal symbol manipulation --- and we know independently that symbol manipulations by themselves are not sufficient to guarantee the presence of meaning. That is the principle on which the Chinese room argument works.'' (Searle, 1990, pp.21--22)
In effect, Searle wants to demonstrate that the presence or absence of cognitive phenomena cannot be judged solely on the basis of observed input-output behaviour. It is obvious, therefore, that he also rejects the Turing Test as a test for intelligence. In his article, Harnad proposes a strengthened version of the Turing Test, called the Total Turing Test, which he claims is immune to the Chinese Room argument.

Note that the Chinese Room argument talks about `understanding' without defining it. The underlying assumption is: if there is any understanding going on, it must be experienced by the person in the room --- but that person experiences no understanding, therefore it is not there. This assumption is rejected by some people, who claim that even if the person in the room does not understand Chinese, the whole system, consisting of the person, the rules, the Chinese symbols and so on, does understand Chinese (this is known as the Systems reply).

Now, what if a computer system consists of many interconnected processors, like a neural network? Is it subject to a similar thought experiment? Searle's answer is yes:

``Imagine that instead of a Chinese room, I have a Chinese gym: a hall containing many monolingual, English-speaking men. These men would carry out the same operations as the nodes and synapses in a connectionist architecture (...), and the outcome would be the same as having one man manipulate symbols according to a rule book. No one in the gym speaks a word of Chinese, and there is no way for the system as a whole to learn the meanings of any Chinese words.'' (Searle, 1990, p.22)
Many people, including some who accept the Chinese Room argument, find the Chinese Gym argument much less convincing. For instance, in his article, Harnad refers to it as ``mere hand-waving (...) --- there is in general no way (...) of confirming or disconfirming that the system does or does not have a mind except by being the system''.

So the questions remain: Are connectionist networks essentially different from symbol systems? Could intelligence be an `emergent property' of an appropriately set-up neural network? Can we agree on a decisive test for intelligence? On the following pages, you will find a number of sometimes provocative views on these issues. We will be happy to receive any comments you might have, so that we can continue the debate in future issues of THINK and the whole discussion itself acquires something of a connectionist architecture!

