Instituut voor Taal- en Kennistechnologie
Institute for Language Technology and Artificial Intelligence

Computationalism, Neural Networks and Minds, Analog or Otherwise

Michael G. Dyer

A working hypothesis of computationalism is that Mind arises, not from the intrinsic nature of the causal properties of particular forms of matter, but from the organization of matter. If this hypothesis is correct, then a wide range of physical systems (e.g. optical, chemical, various hybrids, etc.) should support Mind, especially computers, since they have the capability to create/manipulate organizations of bits of arbitrary complexity and dynamics. In any particular computer, these bit patterns are quite physical, but their particular physicality is considered irrelevant (since they could be replaced by other physical substrata).

When an organizational correspondence is set up between patterns in a computer and patterns in some other physical system, we tend to call the computer patterns ``symbols''. The correspondence, however, usually holds only up to some level of organization. In traditional Artificial Intelligence (AI), a small number of symbols may correspond, for instance, to an entire proposition. In Connectionist Modeling (CM), a symbol will more commonly correspond to a single neuron (or perhaps just a single chunk of a neurotransmitter within a neuron). Thus, the major issues that distinguish AI from CM concern which levels of granularity capture the essential organizational dynamics, rather than any (purported) abandonment of computationalism within the CM paradigm (Dyer, 1991). To my knowledge, both paradigms are strongly committed to computationalism.

In analog systems, however, physicality is central to organizational dynamics. For instance, to find the minimal energy state of water flowing downhill, we simply set up the terrain in a gravitational field, add water, and then let nature ``be itself''. But is there some extra capability that an analog system (A1) has over its organizational counterpart (C1) on the computer?
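The contrast can be made concrete with a small sketch (not from the original text; the function name, the 1-D terrain, and the stepwise-descent rule are all illustrative assumptions). Here a computational counterpart C1 of the analog process A1 lets a ``drop'' of water move to a lower neighboring cell of a terrain until no lower neighbor exists, i.e. until it reaches a local energy minimum -- the organization of the downhill flow is reproduced, though nothing gets wet:

```python
def settle(terrain, start):
    """Move stepwise downhill from `start` until a local minimum is reached.

    `terrain` is a list of heights; the `drop' moves to its lowest
    strictly-lower neighbor, mimicking water seeking minimal energy.
    """
    pos = start
    while True:
        neighbors = [p for p in (pos - 1, pos + 1) if 0 <= p < len(terrain)]
        lower = [p for p in neighbors if terrain[p] < terrain[pos]]
        if not lower:
            return pos  # local minimum: no lower neighbor remains
        pos = min(lower, key=lambda p: terrain[p])

terrain = [5, 3, 4, 2, 6, 1, 7]
print(settle(terrain, 0))  # the drop starting at index 0 settles at index 1
```

The simulation shares the analog system's organizational behavior (it halts exactly where the water would pool) while sharing none of its physicality, which is precisely the question the next paragraphs take up.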

Clearly, A1 remains physically distinct from C1 -- e.g. simulating the organization of water molecules in a computer will never make the computer physically wet. But what about the organizational capabilities of A1 with respect to C1? Is there some organizational behavior that A1 is capable of but C1 is not? The answer to this question depends on the level at which the organizational correspondence has been established.

If nature is fundamentally discrete (as is the current view of quantum physics), then each symbol could conceivably correspond to some smallest (indivisible) unit of matter-energy or space-time. Thus, C1 modeled at a quantum level would have all the organizational properties of A1 (still without exhibiting any of A1's physicality). Hopefully, this extremely detailed level of organization is not needed to exhibit Mind.

If Minds do not arise solely from the organization of matter (but require specific forms of physicality) then both Harnad and Searle are right -- no computer could ever have Mind just by virtue of its organization. But are there any persuasive arguments for needing some particular physical substratum?

Searle's ``Chinese Room'' argument is unpersuasive because there should be no expectation that Searle, in acting as an interpreter (whether at AI, CM or more detailed levels of organization), would understand Chinese. When we implement a natural language processing (NLP) system ``on top of'', say, a Lisp (or Prolog) interpreter, we do not expect that interpreter to understand what the NLP system understands. Thus, Searle's lack of Chinese understanding should come as no surprise (Dyer, 1990a,b).

Harnad's ``Transducer'' argument is that physical transducers are required for Mind (with analog ones apparently now being Harnad's best candidates). Harnad's argument suffers from the ``Out of Sight, Out of Mind'' problem. That is, if we build a Mind-like system (for instance, able to read and understand Harnad's position paper and this commentary) and disconnect its eyes (and any other sensors/effectors), the system (according to Harnad) would lose its Mind. Harnad's argument also suffers from the ``Virtual Reality'' rebuttal, in which we hook up a Mind-like system M to a Virtual Reality system. M is grounded in a sensory reality, but since that entire reality is computer generated, no physical transducers (only simulated ones) are needed (Dyer, 1990a,b).

Where does this leave us? Without definitive arguments for the need for special forms of physicality, we are left with both sides essentially arguing over the definition of Mind. The Computationalists define Mind in terms of Mind-like behavior, resulting from the organization of matter at some level of granularity (usually enough to pass either the TT or TTT). The Physicalists simply define Mind as requiring some extra (as yet unexplained) physicality (analog or otherwise). But until some convincing pro-physicality arguments come along, our best strategy should be to judge potential minds in terms of their Mind-like capabilities and behaviors, not their physical substrata.
