Institute for Language Technology and Artificial Intelligence

In traditional terminology, analog computers represent variables by continuously-varying quantities, whereas digital computers represent them by discretely-varying quantities (typically voltages, currents, charges, etc., in both cases). Thus the difference between analog and digital computation lies in a distinction between the continuous and the discrete, but it is not the precise mathematical distinction. What matters is the behavior of the system at the relevant level of analysis. For example, in an analog computer we treat charge as though it varies continuously, although we know it is quantized (in units of the electron charge). Conversely, in a digital computer we imagine we have two-state devices, although we know that the state must vary continuously from one extreme state to the other (voltage cannot change discontinuously). The mathematical distinction between discrete and continuous is absolute, but irrelevant to most physical systems.

Many complex systems are discrete at some levels of analysis and continuous at others. The key questions are:

- What level of analysis is relevant to the problem at hand?
- Is the system approximately discrete or approximately continuous (or neither) at that level?

Having considered the differences between analog and digital computers, I'll now consider their similarities, which I think are greater than Harnad admits.

First, both digital and analog computers provide *state spaces*,
which can be used to represent aspects of the problem. In
digital computers the set of states is (approximately) discrete,
e.g., most of the time the devices are in one of two states (i.e., 0
and 1). On the other hand, in analog computers the set of states is
(approximately) continuous, e.g., in going from 0 to 1 it seems to
pass through all intermediate values. In both cases the physical
quantities controlled by the computer (voltages, charges, etc.)
correspond to quantities or qualities in the problem being solved
(e.g., velocities, masses, decisions, colors).

Both digital and analog computers allow the programmer to
control the trajectory of the computer's state through the state
space. In digital computers, *difference equations* describe
how the state changes discretely in time, and programs are just
generalized (numerical or nonnumerical) difference equations
(MacLennan 1989; 1990a, pp. 81, 193). On the other hand, in analog
computers, *differential equations* describe how the state
changes continuously in time. In both cases the actual physical
quantities controlled by the computer are irrelevant; all that
matters are their ``formal'' properties (as expressed in the
difference or differential equations). Therefore, analog
computations are independent of a specific implementation in the
same way as are digital computations. Further, analog
computations can support interpretations in the same way as can
digital computations (a point elaborated upon below).
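The parallel between the two kinds of state trajectory can be made concrete with a small sketch (my own illustration, not from the text; the decay rate `k` and step size `dt` are arbitrary values chosen for the example). The same decay law appears once as a difference equation, updating the state at discrete time steps, and once as the closed-form solution of the corresponding differential equation, describing a continuously-changing state:

```python
import math

def discrete_trajectory(x0, k, dt, steps):
    """Difference equation: x[n+1] = x[n] - k*x[n]*dt.
    The state changes discretely in time."""
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] * (1.0 - k * dt))
    return xs

def continuous_state(x0, k, t):
    """Closed-form solution of the differential equation dx/dt = -k*x.
    The state changes continuously in time."""
    return x0 * math.exp(-k * t)
```

As the step size shrinks, the discrete trajectory approaches the continuous one; in both cases only the formal update law matters, not the physical quantity (voltage, charge) that carries the state.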

In the theory of computation we study the properties of *idealized
computational systems*. They are idealized because they make
certain *idealizing assumptions*, which we expect to be only
approximately instantiated in reality. For example, in the
traditional theory of discrete computation, we make such assumptions
as that tokens can be unambiguously separated from the background, and
that they can be unambiguously classified as to type.

The theory of discrete computation has been well developed since the 1930s and forms the basis for contemporary symbolic approaches to cognitive modeling. In contrast, the exploration of continuous computation has been neglected until recently, but we expect that continuous computational theory will provide a foundation for connectionist cognitive models (MacLennan 1988, in press-a, in press-b). Although there are many open questions in this theory --- including the proper definition of computability, and of universal computing engines analogous to the Universal Turing Machine --- the general outlines are clear (MacLennan 1987; 1990c; in press-a; in press-b; Wolpert & MacLennan submitted; see also Blum 1989; Blum & al. 1988; Franklin & Garzon 1990; Garzon & Franklin 1989; 1990; Lloyd 1990; Pour-El & Richards 1979; 1981; 1982; Stannett 1990).

In general, a *computational system* is characterized by:

- a *formal* part, comprising a state space and processes of
transformation; and
- an *interpretation*, which
  - assigns meaning to the states (thus making them *representations*),
  - assigns meaning to the processes, and
  - is *systematic*.
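This characterization can be rendered as a tiny sketch (my own illustration; the 2-bit code and the successor process are invented for the example). The formal part manipulates tokens blindly; the interpretation is systematic because the meaning it assigns to the process (adding 1 mod 4) commutes with the meanings it assigns to the states:

```python
def step(s):
    """Formal process: successor in a 2-bit code, defined purely by
    token manipulation, with no reference to what the tokens mean."""
    return {"00": "01", "01": "10", "10": "11", "11": "00"}[s]

def interpret(s):
    """Interpretation: each state represents a number (making it a
    representation)."""
    return int(s, 2)

# Systematicity: interpreting the result of the formal process gives
# the same answer as applying the interpreted process to the meaning.
assert all(interpret(step(s)) == (interpret(s) + 1) % 4
           for s in ["00", "01", "10", "11"])
```

Nothing in this sketch depends on the states being discrete; a continuous state space with a systematic interpretation would satisfy the same characterization.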

Whether discrete or continuous computation is a better model for cognition is a significant empirical question. Certainly connectionism shows great promise in this regard, but it leaves open the question of how representations get their meaning. The foregoing shows, I hope, that the continuous/discrete (or analog/digital) computation issue is not essential to the symbol grounding problem. I don't know if Harnad is clear on this; sometimes he seems to agree, sometimes not. What, then, is essential to the problem?

However, I agree with Harnad and Searle that symbols do not get their meanings merely through their formal relations with other symbols, which is in effect the claim of computationalism (analog or digital). In this sense, connectionist computationalism is no better than symbolic computationalism.

There is not space here to describe an alternate approach to
these problems, but I will outline the ideas and refer to other
sources for the details. Harnad argues that there is an
``impenetrable `other-minds' barrier'' (Hayes & al. 1992), and from
a philosophical standpoint that may be true, but from a scientific
standpoint it is not. Psychologists and ethologists routinely
attribute ``understanding'' and other mental states to other
organisms on the basis of external tests. The case of ethology is
especially relevant, since it deals with a range of mental
capabilities, which, it's generally accepted, includes
understanding and consciousness at one extreme (the human), and
their absence at the other (say, the amoeba). Therefore it becomes
a scientific problem to determine whether an animal's response to
a stimulus is an instance of it understanding the *meaning* of
a symbol or merely responding to its physical form (Burghardt
1970; Slater 1980).

Burghardt (1970) solves the problem of attributing meaning
to symbols by defining communication in terms of behavior that
tends to influence receivers in a way that benefits the signaller or
its group. Although it may be difficult in the natural environment
to reduce such a definition to operational terms, the techniques of
*synthetic ethology* allow carefully-controlled experimental
investigation of meaningful symbol use (MacLennan 1990b; 1992;
MacLennan & Burghardt submitted). (For example, we've
demonstrated the evolution of meaningful symbol use from
meaningless symbol manipulation in a population of simple
machines.)

Despite our differences, I agree with Harnad's requirement that meaningful symbols be grounded. Furthermore, representational states (whether discrete or continuous) have sensorimotor grounding, that is, they are grounded through the system's interaction with its world. This makes transduction a central issue in symbol grounding, as Harnad has said.

Information must be materially instantiated --- represented
in a configuration of matter and energy --- if it is to be processed
by an animal or a machine. A *pure transduction* changes the
kind of matter or energy in which information is instantiated.
Conversely, a *pure computation* changes the configuration of
matter and energy --- thus processing the information --- without
changing its material embodiment. We may say that in
transduction the *form* is preserved but the *substance* is
changed. In computation, in contrast, the *form* is changed
but the *substance* remains the same. (Most actual
transducers do not do pure transduction, since they change the
form as well as the substance of the information.)
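The form/substance distinction can be sketched as follows (a toy rendering of my own; the `Signal` type and the substance labels are invented for illustration). Information is modeled as a form carried by some substance; pure transduction changes only the substance, pure computation only the form:

```python
from dataclasses import dataclass

@dataclass
class Signal:
    form: tuple       # the information-bearing configuration
    substance: str    # the kind of matter/energy instantiating it

def pure_transduction(sig, new_substance):
    """Changes the substance but preserves the form."""
    return Signal(form=sig.form, substance=new_substance)

def pure_computation(sig, f):
    """Changes the form (processes the information) without changing
    its material embodiment."""
    return Signal(form=f(sig.form), substance=sig.substance)

light = Signal(form=(0.25, 0.75, 0.5), substance="light")
voltage = pure_transduction(light, "voltage")   # same form, new substance
inverted = pure_computation(voltage, lambda xs: tuple(1 - x for x in xs))
```

A real transducer would typically compose both operations, which is the "impure" case noted above.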

Observe that the issue of transduction has nothing to do with the question of analog vs. digital (continuous vs. discrete) computation; transduction can be either continuous or discrete depending on the kind of information represented. Continuous transducers transfer an image from one space of continuous physical variables to another; examples include the retina and robotic sensor and effector systems. Discrete transducers transfer a configuration from one discrete physical space to another; examples include photosensitive switches, toggle switches, and on/off pilot lights.

Harnad seems to be most interested in continuous-to-discrete transduction, if we interpret his `analog world' to mean the world of physics, which is dominated by continuous variables, and we assume the outputs of the transducers are discrete symbols. The key point is that the specific material basis (e.g., light energy) for the information ``out there'' is converted to the unspecified material basis of formal computation inside the computer. Notice, however, that this is not pure transduction, since in addition to changing the substance of the information it also changes its form; in particular it must classify the continuous image in order to assign it to one of the discrete symbols, and so we have computation as well as transduction. (We can also have the case of an ``impure'' discrete-to-continuous transduction; an example would be an effector that interpolates between discretely specified states. Impure continuous/continuous and discrete/discrete transducers also occur; an analog filter is an example of the former.)
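A minimal sketch of this continuous-to-discrete case (my own; the thresholds and symbol names are invented for illustration). The substance changes (light energy to formal token), but so does the form: a classification step collapses the continuous image onto one of finitely many discrete symbols, so computation rides along with the transduction:

```python
def impure_transducer(intensity):
    """Map a continuous light intensity in [0, 1] to a discrete symbol.
    The classification (thresholding) is the computational component
    that makes this transduction impure."""
    if intensity < 0.33:
        return "DARK"
    elif intensity < 0.66:
        return "DIM"
    else:
        return "BRIGHT"
```

Note that distinct continuous inputs collapse onto the same symbol: the classification discards part of the form, which is exactly why this is computation as well as transduction.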

Harnad's response