Re: Dennett: Making a Conscious Robot

From: HARNAD, Stevan (harnad@coglit.ecs.soton.ac.uk)
Date: Mon May 01 2000 - 21:47:18 BST


Discussion archived at:

http://www.cogsci.soton.ac.uk/~harnad/Hypermail/Foundations.Cognitive.Science2000/

On Mon, 1 May 2000, Egerland, Matthias wrote:

> Egerland:
> Dennett points out that the goal is set pretty low, because the project is not
> about creating a machine that would be able to pass a T2 - or even higher -
> Turing test.

Actually, Cog would be "t3" (i.e., a toy fragment or subset of T3).

> > DENNETT:
> > It is unlikely, in my opinion, that anyone will ever make a robot that is
> > conscious in just the way we human beings are.

What does "conscious in just the way we human beings are" mean? A turtle
is not conscious the way we are, but it is conscious. It feels something
when you pinch it. That's what we want in a conscious robot.

The trouble is that talking too liberally about different "forms" of
consciousness risks admitting one form we definitely don't want to
admit, namely, no consciousness at all, "nobody home," a "Zombie."

There ARE Zombies, by the way: A teapot is a Zombie; so is a toaster,
and a furnace, and a thermostat, and a plane.

And a computer.

So what about a (t3 < T3) robot?

> > DENNETT:
> > it is conceivable [...] that the
> > sheer speed and compactness of biochemically engineered processes in the
> > brain are in fact unreproducible in other physical media (Dennett, 1987). So
> > there might be straightforward reasons of engineering that showed that any
> > robot that could not make use of organic tissues of one sort or another
> > within its fabric would be too ungainly to execute some task critical for
> > consciousness.

This is a definite possibility. But notice that it is already covered by
T3, for then a robot not made out of the "right stuff" would be unable
to pass T3.

> > DENNETT:
> > But if somebody were to invent some sort
> > of cheap artificial neural network fabric that could usefully be spliced
> > into various tight corners in a robot's control system, the embarrassing
> > fact that this fabric was made of organic molecules would not and should
> > not dissuade serious roboticists from using it [...].

That would be fine. But remember that the goal of Cognitive Science
(though perhaps not of AI) is to "reverse-engineer" the mind. AI may be
satisfied with merely building clever devices, so they can do useful
things for us; but if those devices are produced in ways that we don't
understand, they may still be useful to us, but they will not give us
an understanding of the way the mind works, because we won't understand
the way THEY work!

So, yes, bio-modules might be components in a T3 robot -- but they had
better be components that still allow us to understand the robot's
overall (T3) function and the basis for its T3 success. Otherwise such
robots will be more like clones than explanations.

> > DENNETT:
> > Making a fully-equipped conscious adult robot
> > might just be too much work. It might be vastly easier to make an initially
> > unconscious or nonconscious "infant" robot and let it "grow up" into
> > consciousness, more or less the way we all do.

This came up several times in class, remember?

It might be easier (and more informative) to design a robot that
develops and learns, like a human, than to design one that starts out
with the full T3 capacities of an adult.

And of course part of T3 itself is the ability to learn.

But its historical antecedents in development and learning (or in
evolution) are not essential to T3-power NOW. And it is T3-power now
that we are aiming for. (Real-time history is not an essential part of
it.)

> DENNETT:
> Steven Spielberg's film, Schindler's List: [some claim] it could
> not have been created entirely by computer animation, without the filming of
> real live actors. This impossibility claim must be false "in principle"

Valid point. But scripts can be just squiggles and squoggles. Passing T3
can't be done by anything like a script; not even T2 can be (unless the
script consists of all possible pen-pal exchanges of length N, something
that explodes combinatorially and would probably require more symbols
than there are electrons in the universe).
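(To put a hedged number on that explosion -- these are my own
back-of-the-envelope figures, not anything from Dennett's paper -- even
a toy alphabet and absurdly short exchanges already outstrip the roughly
10^80 electrons usually estimated for the observable universe:)

    # Back-of-the-envelope sketch (assumed toy numbers, purely illustrative):
    # count the possible pen-pal exchanges for a tiny alphabet and a very
    # short exchange length.
    from math import log10

    alphabet_size = 27      # 26 letters plus a space -- an assumed toy alphabet
    exchange_length = 100   # characters per exchange -- far shorter than real letters

    possible_exchanges = alphabet_size ** exchange_length
    print(f"roughly 10^{log10(possible_exchanges):.0f} possible exchanges")
    # prints: roughly 10^143 -- versus an estimated ~10^80 electrons in the
    # observable universe.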

And passing T3 is no more like animating a specific, finite film than it
is like puppeteering: T3 must be autonomous, its inner "scripts" must be
prepared for any T3 eventuality (at least any that any of us are
prepared for).

> > DENNETT:
> > We consist of billions of cells, and a single human cell contains
> > within itself complex "machinery" that is still well beyond the artifactual
> > powers of engineers. We are composed of thousands of different kinds of
> > cells, including thousands of different species of symbiont visitors, some
> > of whom might be as important to our consciousness as others are to our
> > ability to digest our food! If all that complexity were needed for
> > consciousness to exist, then the task of making a single conscious robot
> > would dwarf the entire scientific and engineering resources of the planet
> > for millennia. And who would pay for it?

This is similar to the earlier points. T3-power is still the critical
test; and although only optimized bio-modules might be capable of
T3-power, we still have to reverse-engineer the functional basis of
their success if we are to explain it, and not merely duplicate it.

> Egerland:
> Interestingly Dennett does not bother with the theoretical question how to
> create 100% consciousness artificially.

That is true; and there is a reason for that. Can anyone say what it
is?

> > DENNETT:
> > part of the hard-wiring that must be provided in advance is an "innate"
> > if rudimentary "pain" or "alarm" system to serve roughly the same protective
> > functions as the reflex eye-blink and pain-avoidance systems hard-wired into
> > human infants.

Does anyone notice a bit of cheating here? (Not the importing of
unexplained biological bits this time, but the importing of something
else, unexplained?)

(Exercise: What is the difference between a functional system capable of
learning to avoid structural damage to itself -- functioning as-if it
were in pain -- and a system capable of feeling pain?)

Remember that this question is only valid at (t3 < T3) scale.
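(To make the exercise concrete, here is a minimal sketch -- entirely my
own illustration, with hypothetical names, and nothing to do with Cog's
actual architecture -- of an "as-if pain" module. It registers damage
signals and learns to avoid whatever preceded them; the functional story
is complete, yet nothing in it settles whether anything is FELT.)

    # Hypothetical "as-if pain" learner (illustrative only, not Cog's design):
    # damage signals raise a learned aversion to the action that preceded
    # them, so the system comes to avoid structural damage -- pure function,
    # with no claim about feeling.
    import random

    class AsIfPain:
        def __init__(self, actions):
            self.aversion = {a: 0.0 for a in actions}  # learned penalty per action

        def choose_action(self):
            # pick the least-averted action, with a little random exploration
            return min(self.aversion, key=lambda a: self.aversion[a] + random.random())

        def damage_signal(self, last_action, severity):
            # the "alarm": make the damaging action less likely in future
            self.aversion[last_action] += severity

    robot = AsIfPain(["reach", "grip", "withdraw"])
    act = robot.choose_action()
    robot.damage_signal(act, severity=5.0)  # simulated damage sensor firing
    # From now on 'act' is avoided: as-if-pain behaviour, feeling not addressed.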

> Egerland:
> So, even if Cog can not 'feel' real pain, at least it has to be able to
> interpret the signals from its sensors situation dependently in different
> ways. And by the way: Even we humans sometimes have difficulties to
> interpret the 'input' in an appropriate way.

Notice that the tendency to cheat is contagious!

For if it doesn't really FEEL (but merely acts AS-IF it were feeling),
then what makes you think it really INTERPRETS (anything), rather than
merely acting AS-IF...?

Of course, I would only raise this methodological point for (t3 < T3)
robots. Once T3 scale is attained, all such worries are neutralized
(for the very same reasons they are neutralized with you and me).

> Egerland:
> unlike in pure natural development Cog is always
> supervised by scientists, who can correct errors and teach it how to behave.
> Unfortunately, in my opinion this could be a restriction as well, because our
> intelligence is also limited and we make lots of mistakes. Maybe by our
> supervision we prevent Cog from getting to a degree of intelligence we have
> never thought of.

This is only a "restriction" if our scale is (t3 < T3), for, by
definition, full T3 can do everything we can do.

(Who designed Cog-T3, or how, is irrelevant, if Cog-T3 has full T3 power.
Those are irrelevant historical details.

Besides, it is mostly the world that corrects our errors -- and that is
part of T3 learning power too. It is of course cheating -- and merely
puppeteering rather than robotics -- to have the designers doing any of
the work that is supposed to be going on autonomously inside Cog-T3.)

> > DENNETT:
> > [designers need] somehow to give Cog a
> > motivational structure that can be at least dimly recognized, responded to,
> > and exploited by naive observers. In short, Cog should be as human as
> > possible in its wants and fears, likes and dislikes.

Does anyone detect any potential cheating here (if we are at t3 < T3)?
Is there not still that little problem of distinguishing real
(conscious) motivation, wants, fears, likes -- from AS-IF motivation,
etc.?

Again, at T3-scale, this risk vanishes (or shrinks to the same size as
it is with any of the rest of us).

> > DENNETT:
> > It is important to recognize that [...] having a body has been appreciated
> > [n]ot [...] because genuine embodiment provides some special vital
> > juice that mere virtual-world simulations cannot secrete, but for the more
> > practical reason [...] that unless you saddle yourself with all the
> > problems of making a concrete agent take care of itself in the real world,
> > you will tend to overlook, underestimate, or misconstrue the deepest
> > problems of design.

Good point, but something slips by too:

Yes, the real world (and a real body in the real world) are their
own best models: trying to design a T3-Cog through simulation alone
would require too much second-guessing of all the possibilities that
Cog would have to encounter and be able to manage in order to pass T3.

So that's a good case for doing embodied rather than virtual robotics to
pass T3.

But there is still the matter of the virtual-Cog-T3 simulation in its
simulated virtual-world: For, based on what we discussed (about the
symbol-grounding problem), that would all still just be squiggles and
squoggles -- just as a virtual airplane simulation in its simulated
virtual-world would just be squiggles and squoggles.

Squiggles and squoggles systematically interpretable AS-IF they were a
conscious robot and a flying plane, respectively. But to actually get a
robot conscious and a plane flying, you would still have to EMBODY them
-- as a real T3-Cog and a real airplane, respectively.

(Not to disparage such squiggles, though, for, after all, they would be
the full functional blueprint for successfully building a T3-worthy Cog
and an airworthy plane; no mean symbolic feat.)

> > DENNETT:
> > A recent criticism of "strong AI" that has received quite a bit of attention
> > is the so-called problem of "symbol grounding" (Harnad, 1990). It is all
> > very well for large AI programs to have data structures that purport to
> > refer to Chicago, milk, or the person to whom I am now talking, but such
> > imaginary reference is not the same as real reference, according to this
> > line of criticism. These internal "symbols" are not properly "grounded" in
> > the world, and the problems thereby eschewed by pure, non- robotic, AI are
> > not trivial or peripheral.

I have to point out that Matthias left out the (relevant and
correct) point Dennett goes on to make here, which is that, unlike
(T2) virtual pen-pals, (T3) robots are immune to the symbol-grounding
problem (and to Searle's Chinese Room Argument).

But they do have to be embodied. And they do have to pass T3 (not just
t3).

HARNAD, Stevan
