
Harnad responds

Honavar says little that I can disagree with. For me, analog structures and processes are those that are best described as obeying differential equations, rather than as implementations of implementation-independent symbol manipulations (or, as MacLennan puts it, symbolic difference equations). The difference between a real planetary system and a computer-simulated planetary system captures the distinction quite nicely. It seems to me that the final chapter of quantum mechanics (concerning the ultimate continuity or discreteness of the physical world) has nothing to do with this dichotomy, no matter how it turns out.
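
To make the dichotomy concrete, here is a minimal sketch (my illustration, not part of the original exchange), assuming an idealized one-body orbit with gravitational parameter $GM$. The real planetary system obeys the differential equation
\[
\ddot{\mathbf{r}}(t) = -\,\frac{GM}{\|\mathbf{r}(t)\|^{3}}\,\mathbf{r}(t),
\]
whereas a computer simulation of it implements a symbolic difference equation, for example a discrete update with step size $\Delta t$:
\[
\mathbf{v}_{n+1} = \mathbf{v}_{n} - \frac{GM}{\|\mathbf{r}_{n}\|^{3}}\,\mathbf{r}_{n}\,\Delta t,
\qquad
\mathbf{r}_{n+1} = \mathbf{r}_{n} + \mathbf{v}_{n+1}\,\Delta t.
\]
The planet is governed by the first, continuous form; the simulation merely manipulates symbol tokens in accordance with the second.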

Whether symbols are grounded by learning or by evolution does not much matter to my theory; I happen to focus on learned categories, but the raw input we begin with is clearly already filtered and channeled considerably by evolution. It would be incorrect (and homuncular), however, to speak of a grounded system's ``interpreting'' the shapes of its symbols. If the symbols are grounded, then they are connected to, and about, what they are about independently of any interpretations we (outsiders) project onto them, in virtue of the system's TTT (Total Turing Test) interactions and capacity. But (as Searle points out in his commentary, and I of course agree) there may still be nobody home in the system, no mind, hence no meaning, in which case the symbols would still not really be ``about'' anything at all; they would merely, at best, be TTT-connected to certain objects, events, and states of affairs in the world. Grounding does not equal meaning, any more than TTT-capacity guarantees mind.

And there is always the further possibility that symbol grounding is a red herring, because symbol systems are a red herring, and that not much of whatever really underlies mentation is computational at all. The TTT would still survive if this were the case, but ``grounding'' would then reduce to robotic ``embeddedness'' and ``situatedness.''