Harnad, S. (1993) Grounding Symbols in the Analog World with Neural Nets. Think 2: 12-78 (Special Issue on "Connectionism versus Symbolism" D.M.W. Powers & P.A. Flach, eds.). Pp. 21-22.


Transduction and Degree of Grounding

C. Franklin Boyle
CDEC, 3028 Hamburg Hall
Carnegie Mellon University
Pittsburgh, PA 15213
fb0m@andrew.cmu.edu

While I agree in general with Stevan Harnad's symbol grounding proposal, I do not believe "transduction" (or "analog process") PER SE is useful in distinguishing between what might best be described as different "degrees" of grounding and, hence, for determining whether a particular system might be capable of cognition. By 'degrees of grounding' I mean whether the effects of grounding go "all the way through" or not. Why is transduction limited in this regard? Because transduction is a physical process which does not speak to the issue of representation, and, therefore, does not explain HOW the informational aspects of signals impinging on sensory surfaces become embodied as symbols or HOW those symbols subsequently cause behavior, both of which, I believe, are important to grounding and to a system's cognitive capacity. Immunity to Searle's Chinese Room (CR) argument does not ensure that a particular system is cognitive, and whether or not a particular degree of groundedness enables a system to pass the Total Turing Test (TTT) may never be determined.

It is clear that transduction is necessary for realizing robotic capacity and for grounding, as Harnad emphasizes. But how would the symbols in Harnad's "hybrid analog/symbolic robot" be any less arbitrary than symbols as BITMAPS of the objects and categories to which they refer -- perhaps projected onto a digital medium (e.g., a laser disk) through a camera -- in a "core" symbol manipulation system? Though Harnad would most likely consider this arrangement of computer and camera a "computational core-in-a-vat" (Section 6.4) system, digitizing the camera's analog input is certainly an example of "processing analog sensory input" (Section 2.4). Thus, not only would such a system be immune to Searle's CR argument, but its bitmaps would also be NONARBITRARY in relation to what they are bitmaps of, especially if they are produced directly from analog images. Could such a system pass the TTT? With efferent decoders and transducers, there is no A PRIORI reason to suppose it cannot; that is, no reason to assume that its symbols would not "cohere systematically with its robotic transactions with the objects, events and states of affairs that its symbols are interpretable as being about" (Section 7.1). Yet, intuitively, such a system could hardly be said to lay claim to mentality any more than the CR system. And, based on transduction alone, neither could Harnad's robot, because transduction does not distinguish the degree to which symbols in his system and in the computer-plus-camera system are grounded.
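To make the computer-plus-camera arrangement concrete, here is a minimal sketch of what digitizing an analog projection into a bitmap "symbol" might look like; the function, threshold and toy image are illustrative assumptions, not part of any system discussed here.

    # Illustrative sketch only: an "analog" grid of intensities is discretized
    # into a binary bitmap whose shape covaries with the projected object, so
    # the resulting symbol is nonarbitrary with respect to its referent.
    def digitize(analog_image, threshold=0.5):
        """Turn real-valued intensities (the analog projection) into a bitmap."""
        return [[1 if pixel >= threshold else 0 for pixel in row]
                for row in analog_image]

    # A toy analog projection of a vertical bar.
    analog_projection = [
        [0.1, 0.9, 0.1],
        [0.2, 0.8, 0.1],
        [0.1, 0.9, 0.2],
    ]

    bitmap_symbol = digitize(analog_projection)
    # bitmap_symbol == [[0, 1, 0], [0, 1, 0], [0, 1, 0]] -- it preserves the
    # bar's shape, which is what makes it nonarbitrary.

The bitmap's form is inherited from the projection rather than stipulated by convention, which is all the nonarbitrariness claimed for it here.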

What would determine the degree of grounding is HOW symbols are causal. For example, even though the computer-plus-camera system is grounded and the CR is not, these two systems are physically similar with respect to how their symbols cause change. In other words, the symbols in the computer-plus-camera system are not grounded "all the way through"; once these "bitmap symbols" are input to the computer via the camera, the two systems become identical insofar as the forms of the symbols in both are arbitrary with respect to how they cause change. Why? Because in digital computers, symbols cause change through the physical process of pattern matching (PM) (Boyle, 1990; Boyle, 1991; Boyle, 1992). Like pieces in a puzzle, they "fit" (match) the left-hand sides (matchers) of rules whose right-hand side actions are subsequently triggered, so that as long as there is a fit and the matcher is associated with the appropriate action -- that is, an action which conforms with a systematic interpretation of the symbols -- it does not matter what the symbols look like (e.g., whether they are bitmaps or propositional representations of their referents). Furthermore, it does not matter through what physical processes -- what sorts of encodings or transductions -- the symbols originated. Thus, if Harnad's "symbols and symbolic activity" function through PM, then regardless of how they are CONNECTED to the sensory projections of the objects to which they refer (e.g., by connectionist networks, pointers, etc.), his robot would be little more than a computational system with a PARTICULAR peripheral transduction mechanism, even though he sees the presence of a "second constraint, that of the nonarbitrary 'shape' of the sensory invariants that connect the symbol to the analog sensory projection of the object to which it refers" (Section 7.5) as a significant difference.
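The pattern-matching point can be sketched with a toy production system (the rules and tokens below are invented purely for illustration): whatever action a symbol triggers depends only on what its form happens to "fit", not on whether that form resembles its referent.

    # Illustrative toy production system: left-hand sides (matchers) paired
    # with right-hand side actions; a symbol causes change only by fitting a
    # matcher, so its shape is arbitrary with respect to how it causes change.
    rules = [
        (lambda sym: sym == "OBSTACLE_AHEAD", lambda: "turn_left"),
        (lambda sym: sym == "PATH_CLEAR",     lambda: "move_forward"),
    ]

    def fire(symbol):
        """Fire the first rule whose matcher the symbol fits."""
        for matcher, action in rules:
            if matcher(symbol):      # the only causal role the symbol plays
                return action()      # the action is triggered by the fit alone
        return None

    print(fire("OBSTACLE_AHEAD"))    # -> turn_left

Swapping "OBSTACLE_AHEAD" for a bitmap, a proposition, or any other token changes nothing, provided the matcher is rewritten to fit the new token and remains paired with the interpretation-conforming action.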

But how could this "second constraint" really be a constraint if it does not affect how the symbols cause change (it cannot be a constraint just because a connection exists between analog forms and symbols)? If the symbols effect change through pattern matching, then the nonarbitrary shape of the sensory invariants is superfluous and Harnad's robot would be cognitively equivalent to the computer-plus-camera system. Therefore, the only way for the nonarbitrary shape of the sensory invariants to affect behavior is through a causal mechanism other than PM, one which would presumably ground the system "all the way through". Harnad actually alludes to this mechanism when he notes that discrimination could be accomplished by "SUPERIMPOSING analog projections of objects" (Section 7.2; my italics) and that category structures could be generated through "analog reduction" (Harnad, 1990). Such processes involve what I call "structure-preserving superposition" or SPS (Boyle, 1991; Boyle, 1992), which is fundamentally different from PM. Thus, if Harnad wants to distinguish his robot from so-called computational core-in-a-vat systems, he should consider the category structures themselves as symbols and have them effect change via SPS, which would ground the system "all the way through". Obviously, SPS is an analog process, but, more importantly, it is a causal mechanism that enables symbols to be causal according to what they represent, in a physically principled way.
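As a speculative sketch only -- the overlap measure and toy projections are illustrative assumptions, not a definition of SPS -- the contrast with PM might be pictured as follows: the effect of superimposing two analog projections is fixed by the represented shapes themselves, not by which matcher a token happens to fit.

    # Speculative illustration of structure-preserving superposition: two
    # analog projections are overlaid, and the physical degree of overlap --
    # a consequence of their nonarbitrary shapes -- determines the outcome.
    def superimpose(projection_a, projection_b):
        """Overlay two equal-sized binary projections; return the fraction of
        projection_a's 'on' cells that coincide with 'on' cells of projection_b."""
        on_both = sum(a and b
                      for row_a, row_b in zip(projection_a, projection_b)
                      for a, b in zip(row_a, row_b))
        on_a = sum(cell for row in projection_a for cell in row)
        return on_both / on_a if on_a else 0.0

    bar   = [[0, 1, 0], [0, 1, 0], [0, 1, 0]]
    bar_2 = [[0, 1, 0], [0, 1, 0], [0, 1, 0]]
    ring  = [[1, 1, 1], [1, 0, 1], [1, 1, 1]]

    print(superimpose(bar, bar_2))   # 1.0   -- identical shapes, maximal effect
    print(superimpose(bar, ring))    # ~0.67 -- the shapes themselves limit the effect

Unlike the production system sketched above, no rewriting of matchers can compensate here: change the shape of a projection and you change its causal consequences.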

References

Boyle, C.F. (1990) Informing: the Basis of a Solution to the Mind-Body Problem. Proceedings of the 34th Annual Conference of the International Society for the Systems Sciences, (Portland, OR), 1190-1200.

Boyle, C.F. (1991) On the Physical Limitations of Pattern Matching. Journal of Experimental and Theoretical Artificial Intelligence, 3 (2):191-218.

Boyle, C.F. (1992) Projected Meaning, Grounded Meaning and Intrinsic Meaning. Proceedings of the 14th Annual Meeting of the Cognitive Science Society. (New Jersey: Lawrence Erlbaum).

Harnad, S. (1990) The Symbol Grounding Problem, Physica D, 42: 335-346.

HARNAD RESPONSE TO BOYLE

Boyle's is a friendly commentary, so there is no point dwelling on the minor differences between us: A system that can pass the TTT is good enough for me. As a matter of logic, transduction will have to be part of its successful functioning. How much of its transducer activity will remain analog, how much will be discretized and filtered, how much will be processed syntactically (by "pattern-matching") -- these are all empirical questions about how that future system will actually succeed in passing the TTT. I happen to have my own hypotheses (neural nets filtering out learned invariants in the analog projection, connecting them to arbitrary symbolic names, which are then manipulated compositionally, but inheriting the nonarbitrary constraint of the grounding) and Boyle may have his. The point, however, is that just as there are no a priori degrees of passing the TTT (that's what the "Total" ensures), there are no a priori degrees of grounding (at least not in the sense I use the word). Ungrounded symbols mean what they mean only because (within their formal system) they can be systematically interpreted as meaning what they mean. In contrast, the meanings of grounded symbol systems are grounded in the system's capacity for robotic interactions with what the symbols are about.

Neither immunity to Searle's Chinese Room Argument nor TTT-groundedness can guarantee that there's somebody home in such a robot, but I happen to think they're the best we can ever hope to do, methodologically speaking. If Boyle's "structure-preserving superposition" can do a better job, all power to it. But at this point, it seems to amount to what Searle would call "speculative neurophysiology," whereas transduction and TTT-power have face validity.
-- S.H.