Re: Dennett: Making Conscious Robots

From: Henderson Ian (irh196@ecs.soton.ac.uk)
Date: Thu May 24 2001 - 12:55:05 BST


In reply to: Cove Stuart: "Dennett: Making Conscious Robots"

>> DENNETT:
>> It has limit switches, heat sensors, current sensors, strain gauges and
>> alarm signals in all the right places to prevent it from destroying its
>> many motors and joints. It has enormous "funny bones"--motors sticking
>> out from its elbows in a risky way. These will be protected from harm
>> not by being shielded in heavy armor, but by being equipped with patches
>> of exquisitely sensitive piezo-electric membrane "skin" which will
>> trigger alarms when they make contact with anything. The goal is that
>> Cog will quickly "learn" to keep its funny bones from being bumped--if
>> Cog cannot learn this in short order, it will have to have this
>> high-priority policy hard-wired in.

> Cove:
> All this sounds really impressive, but will the robot
> actually feel pain? When the alarms sound, will the robot be in
> agony or just appear to be? If it isn't the case in this toy
> example (which I think it is not), would it be if we scaled up
> the example, integrating it with others that all appear
> functionally indistinguishable from our own corresponding
> attributes?

Henderson:
If Cog displayed the symptoms of being in pain, we would have no basis for
saying that it wasn't 'actually' in pain. We only 'know' that other human
beings or animals are in pain from their responses to certain stimuli (if
you stick a needle into a baby, it cries). Pain is just the brain's
interpretation of a particular sensation; indeed, the same sensation may
elicit different responses under different circumstances: for some people,
spanking seems to be a source of erotic pleasure, whereas for others it
appears to result in pain. Cog undoubtedly has the ability to feel what it
touches -- it is equipped with sensitive membranes on its fingertips, and
these send signals to its processing units where they are interpreted. If
Cog is programmed with an innate disposition towards recoiling from objects
that damage the sensors on its fingertips, what grounds do you have for
claiming that it is not indeed feeling pain? We too are genetically
'programmed' to protect our tissues by ceasing any activity that is
damaging to them. Furthermore, like human beings, Cog may respond
differently to pain-causing stimuli in different circumstances: for
instance, it may 'endure' pain while fulfilling a goal in cases where it
considers the goal 'worth' the pain.
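
To make the idea concrete, here is a minimal sketch (my own illustration, not
anything from the Cog project) of an innate withdrawal policy with a
goal-weighted override; the sensor names, the threshold, and the notion of a
numeric 'goal value' are all assumptions made for the sake of the example.

    # Hypothetical sketch of an innate pain/withdrawal policy -- not actual Cog code.
    from dataclasses import dataclass

    @dataclass
    class TouchSignal:
        sensor_id: str       # e.g. a fingertip membrane patch
        intensity: float     # 0.0 (light contact) .. 1.0 (damaging pressure)

    PAIN_THRESHOLD = 0.7     # assumed: above this, contact is treated as damaging

    def respond(signal: TouchSignal, goal_value: float) -> str:
        """Decide whether to withdraw from a contact or endure it.

        goal_value is an assumed measure of how much the current goal is
        'worth' to the robot; a high enough value lets it endure the pain.
        """
        if signal.intensity < PAIN_THRESHOLD:
            return "continue"                # harmless contact, no pain response
        if goal_value > signal.intensity:
            return "endure"                  # same stimulus, different response
        return f"withdraw from {signal.sensor_id}"  # innate protective reflex

    # The same stimulus elicits different behaviour in different circumstances:
    print(respond(TouchSignal("left_fingertip", 0.8), goal_value=0.2))  # withdraw
    print(respond(TouchSignal("left_fingertip", 0.8), goal_value=0.9))  # endure

Whether such a mechanism amounts to 'feeling' pain is, of course, exactly the
point under dispute; the sketch only shows that the behavioural side is easy
to state.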

>> DENNETT:
>> How plausible is the hope that Cog can retrace the steps of millions of
>> years of evolution in a few months or years of laboratory
>> exploration?... The acquired design innovations of Cog-I can be
>> immediately transferred to Cog-II, a speed-up of evolution of
>> tremendous, if incalculable, magnitude. Moreover, if you bear in mind
>> that, unlike the natural case, there will be a team of overseers ready
>> to make patches whenever obvious shortcomings reveal themselves, and to
>> jog the systems out of ruts whenever they enter them, it is not so
>> outrageous a hope, in our opinion.

> Cove:
> Are they really emulating evolution? The team are able
> to decide which behaviours 'learnt' by Cog are the fittest to
> proceed to the next generation, but will this kind of direct
> change encourage conscious behaviour? Evolution has worked in
> strange ways to mould our intelligent behaviour, so will our
> ideas about fitness take us on a path away from the one by which
> it formed our conscious mind?

Henderson:
Dennett does not say that the behaviours and design innovations
from a previous incarnation of Cog will necessarily be filtered to
remove 'undesirable' ones, only that ones constituting 'obvious
shortcomings' will be remedied. These are the robotic equivalent of
the sorts of defects in humans that lead to early death before
child-bearing age: in evolution too, obvious shortcomings do not
persist from one generation to another. Indeed, in such an
intimately parallel system it would be dangerous to consider each
design innovation in isolation -- these are not modules that can be
removed or implanted with impunity, as the effect on the rest of the
system may be incalculable. It should also be noted that humans
have many weaknesses and faults, and their equivalents in Cog should
be seen as challenges that the robot must learn to cope with and,
where possible, to amend for itself. Artificially creating a perfect
being by purging Cog of its imperfections from generation to
generation will not give us the insight we seek into human
intelligence. The experimenters may teach Cog, but they must not try
to 'learn' Cog.
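
As a rough illustration of the distinction being drawn here (my own sketch,
not anything described by Dennett or the Cog team), the transfer from Cog-I
to Cog-II might patch only innovations flagged as fatal defects while
deliberately carrying every other quirk forward; the data structure and the
'fatal' flag are assumptions for the example.

    # Hypothetical sketch of transferring acquired innovations from Cog-I to Cog-II.
    # Only 'obvious shortcomings' (fatal defects) are patched; mere imperfections
    # are carried forward, since purging them would defeat the point of the study.
    from dataclasses import dataclass

    @dataclass
    class Innovation:
        name: str
        fatal: bool      # an 'obvious shortcoming' that would cripple the next generation
        desirable: bool  # the overseers' opinion -- deliberately NOT used as a filter

    def transfer(acquired: list[Innovation]) -> list[Innovation]:
        """Carry acquired design innovations to the next incarnation,
        remedying only the fatal ones and keeping every other imperfection."""
        return [i for i in acquired if not i.fatal]

    cog_one = [
        Innovation("reaches smoothly for objects", fatal=False, desirable=True),
        Innovation("flinches at loud noises", fatal=False, desirable=False),  # kept anyway
        Innovation("drives elbow motors past their limits", fatal=True, desirable=False),
    ]
    cog_two = transfer(cog_one)
    print([i.name for i in cog_two])

The design choice matters: filtering on 'desirable' rather than 'fatal' would
be the kind of artificial perfecting argued against above.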


