On Sat, 27 May 2000 Grady, James writes:
> >Does it mean that a computer running the right
> >programme will have a mind?
> >Does a plane simulator that simulates its causal
> >structure have the causal power to fly?
> Just supposing we discover an implementation-independent
> algorithm which we used in creating a T3 robot.
> It would be the computational part of our hybrid
> system. Could we not implement
> this algorithm into a virtual world. It would
> obviously not pass T3 but if it could be
> communicated with, it might pass T2.
> Do you think that the algorithm in the virtual
> world would be any less of a mind?
> (Supposing the virtual world was sufficiently
> complex to allow grounding of the fundamental
A virtual T3 robot in a virtual world is straight back to T2! Just a
bunch of systematically-interpretable squiggles and squoggles.
It's important for you to understand this difference: that a bunch of
squiggles (no matter how it is interpretable -- as cookbook recipes,
payroll checks, a novel, a sci-fi story, a robot's adventures, or
letters from a pen-pal) is still just an ungrounded bunch of squiggles.
Nobody home in there. No mind.
Distinguish that from the fact that (based on the Church-Turing Thesis
that just about everything can be simulated computationally), squiggles
(symbol systems), can in principle give us the full recipe for building
anything -- a plane that flies, a robot that passes the T3, the
universe and all its laws and boundary conditions.
As such, a symbol system, whether static on a page, or dynamically
implemented on a computer, can provide the BLUEPRINT for building a
flying plane, a thinking robot and an expanding universe -- but the
squiggle system itself cannot fly, think or expand -- it can just
describe them.
> If so, would not a virtual environment be a
> better place to keep an artificial mind?
No better a place than to keep a plane! A virtual plane in a virtual
world will certainly never fly to real Chicago; and "virtual Chicago" is
just -- you guessed it: squiggle-squoggle.
We're not talking about where to keep the blueprint for building
something that can pass T3, but about actually building something that
can pass T3... (Besides, the proof of the pudding is in the eating: how
can we know that our virtual world captures all the relevant bits of the
real one?)
> Further, another question about the same topic
> Why is T3 immune to Searle's Chinese Room argument?
> Surely the computational part of the T3 hybrid system
> can be isolated as an implementation-independent
> algorithm. The peripheral systems would ground the
> percepts into symbols which would be passed to
> Searle in his Chinese Room (Searle would perhaps have
> his whole family with him and would be dealing with
> lots of different I/Os all at the same time). In this way T3
> still seems to be an algorithm which is subject to Searle's argument.
> ps i think i asked the same question twice!
Yes, you asked the same question twice, and the answer is the same. The
only system that is vulnerable to the argument that it's all just
squiggles is an implementation-independent squiggle system (T2). Making
the system able to interact with the real world of objects, events,
states and features (the world that its squiggles are interpretable as
being ABOUT) -- interact autonomously and directly, with its own
sensorimotor systems, without the indirect mediation of an
interpreter's mind interpreting the squiggles -- grounds the system
and makes it immune to Searle's demonstration that it's all just
squiggles and our interpretations of them. For a grounded robot is not
implementation-independent: hence Searle cannot "reconfigure" himself
as a complete implementation of the T3 robot, as he can with the T2
pen-pal, merely by executing the latter's algorithms (squiggling).
This archive was generated by hypermail 2b30 : Tue Feb 13 2001 - 16:36:29 GMT