Re: Dennett: Making Conscious Robots

From: Henderson Ian (irh196@ecs.soton.ac.uk)
Date: Thu May 24 2001 - 13:05:06 BST


In reply to: Bon Mo: "Re: Dennett: Making Conscious Robots"

>> DENNETT:
>> Might a conscious robot be "just" a stupendous assembly of more
>> elementary artifacts--silicon chips, wires, tiny motors and cameras--or
>> would any such assembly, of whatever size and sophistication, have
>> to leave out some special ingredient that is requisite for consciousness?

> Mo:
> A robot's silicon chips, wires, tiny motors and cameras can all represent
> parts of the human anatomy, so you can build a robot to mimic human
> functions. Consciousness is different: a human clone can have exactly
> the same functionality as the person it was cloned from, yet someone's
> consciousness is individual to them.

Henderson:
As are an individual's eyes, ears, and limbs. We can no more see through
someone else's eyes than we can think through their brain. What is so
special about the brain that it cannot, in theory, be implemented
artificially?

> Mo:
> If you saw consciousness as a set of rules and facts that one follows,
> you could assume that, probabilistically, there is someone else out
> there with the same consciousness as you.

Henderson:
Consciousness is not a state of mind -- it is a property that the
human brain happens to have. It is entirely possible that another
human being is thinking the same *thoughts* as you, or
experiencing the same *feelings*, but these phenomena are
just transitory products of consciousness: they do not constitute
consciousness itself. Consciousness can perhaps be defined as the
ability of a mind to be aware of its own thoughts and surroundings.

>> DENNETT:
>> So there might be straightforward reasons of engineering that showed
>> that any robot that could not make use of organic tissues of one sort or
>> another within its fabric would be too ungainly to execute some task
>> critical for consciousness.

> Mo:
> Take, for example, a human losing a limb and getting an artificial
> replacement. The brain still thinks that there is a limb there, so it
> releases neural chemicals which the artificial limb needs to register
> as inputs. At present these mechanical limbs are frustrating and
> difficult to use: the digital circuitry of the limb cannot resolve the
> changing electrical and chemical pulses, or why and how they should
> affect the limb. This is why I believe that a robot cannot take any
> real advantage of organic tissue: humans cannot make efficient use of
> mechanical aids, so why should a robot be able to make efficient use
> of an organic structure?

Henderson:
As Mo says, there may be practical problems incorporating an organic
structure into a robot such as Cog. However, if the functioning of the
organic structure is completely understood, I see no theoretical reason why
it could not be incorporated in a composite system; if we've succeeded in
reverse engineering the organic tissue, then we have every right to use it
to help us in robotics. This is unlike an organ transplant, where the
surgeon need not necessarily understand the functioning of the organ he
or she is transplanting in order to connect it up to the rest of the
body.

>> DENNETT:
>> Making a fully-equipped conscious adult robot might just be too much
>> work. It might be vastly easier to make an initially unconscious or
>> nonconscious "infant" robot and let it "grow up" into consciousness, more
>> or less the way we all do.

> Mo:
> A human baby may have built-in survival skills, such as
> keeping warm and feeding, but a more interesting realm is the
> infinite ability to learn and the motivation required to learn.
> The majority of babies may not be capable of doing much at birth,
> but they can learn from examples: from their experiences of how
> things are done, from the mistakes they have made, and from their
> general education. The brain processes and stores these facts and
> rules, and any scenario that requires thought, be it conscious or
> unconscious, requires making inductive hypotheses based on the
> knowledge that the brain holds. These hypotheses may or may not
> be correct, and with continual learning of new facts and rules
> they may change. This cumulative scaling up of the brain allows
> us to gain insights into new areas of knowledge we previously
> knew nothing about. This knowledge accumulates throughout your
> life until the day you die, and so those, say, 80 years of
> learning will be extremely difficult to program into a robot in
> less time.

Henderson:
The vast majority of learning occurs within the first few years of
life, so I don't see how this assertion holds. We do not learn at a
uniform rate throughout our lives, and if Cog behaved exactly like
a three-year-old child, for instance, we would have trouble denying
that it was conscious.

>> DENNETT:
>> Since its eyes are video cameras mounted on delicate, fast-moving
>> gimbals, it might be disastrous if Cog were inadvertently to punch itself
>> in the eye, so part of the hard-wiring that must be provided in advance
>> is an "innate" if rudimentary "pain" or "alarm" system to serve roughly
>> the same protective functions as the reflex eye-blink and pain-avoidance
>> systems hard-wired into human infants.

> Mo:
> Human infants do not have hard-wired pain-avoidance systems.
> When they are in pain, they may cry, but they learn more by trial
> and error. You don't know something might cause you pain until it
> happens; infants are naturally curious about their strange, new
> environments and will try to explore all the objects around them.
> It is up to the adults around them to try to educate the infants
> and to restrain them from hurting themselves. Cog's hardwiring
> may stop it from ever "hitting" itself, but surely it cannot
> learn anything from that. To Cog, these are just built-in rules
> that it must obey.

Henderson:
By saying that pain-avoidance systems are hardwired in humans, Dennett
doesn't mean that humans are born with the ability to recognise kettles, fires
and needles as objects that need to be avoided or handled with care because
they cause pain. As Mo says, this *does* have to be learnt. Dennett means
that once a child feels pain, it is programmed to respond to the pain in a
certain way (by employing avoidance tactics), for instance by removing its
hand from a fire. Pain-avoidance is the rule by which children learn how to
interact with their environment without damaging themselves. This instinct
for self-preservation is innate to all animals, not just humans.
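
To make the distinction concrete, here is a small sketch (Python, purely
hypothetical, and nothing to do with Cog's actual wiring): the withdrawal
response is fixed in advance, while knowledge of which objects cause pain
has to be filled in by experience.

    # Innate, fixed in advance: how to respond once pain is felt.
    def pain_reflex(pain_level, threshold=0.5):
        # Withdraw whenever the pain signal crosses the threshold.
        return "withdraw" if pain_level > threshold else "continue"

    # Learned from experience: which objects tend to cause pain.
    painful_objects = set()

    def handle(obj, pain_felt):
        if pain_felt:
            painful_objects.add(obj)  # remember the culprit for next time
        return pain_reflex(1.0 if pain_felt else 0.0)

The first function stands in for what Dennett calls hard-wired; the set of
painful objects is what, as Mo rightly says, has to be learnt.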

>> DENNETT:
>> One talent that we have hopes of teaching to Cog is a rudimentary
>> capacity for human language.

> Mo:
> Language is essential for dynamic interaction, and the idea of
> Cog being able to learn phrases is a problem for neural networks.
> Modern mobile phones can store and process names for speech
> recognition. On a larger-scale problem such as Cog, the same
> fundamental learning algorithms apply; the only change is a larger
> memory capacity to store and process the phrases. A language is a
> complex thing to learn: there are a lot of grammar rules for a
> start, and new words are always emerging. The hardest part, though,
> is for the robot to understand what a phrase actually refers to.
> If a human went up to Cog and trained it to say "Goodnight" after
> the human said the same thing, Cog might learn to reply on demand,
> but does it know what the human means? When someone says
> "Goodnight" it could be for a number of reasons: perhaps you are
> tired and want to go to sleep, or you have finished work and are
> just being polite before you leave. A phrase can refer to more
> than one thing, so meaning is very complicated: it can have
> multiple referents.

Henderson:
It is true that abstract concepts such as 'good' will be difficult to
grasp at first, but this problem is just the same for human children. For
instance, a young child may not understand what behaviour is appropriate
when an adult tells it to 'be good'; it needs to observe many concrete
examples of 'good' behaviour before it can understand what 'good' means. Cog
will probably start with the grounding of much more basic symbols, such as
those that represent tangible objects it can hold and play with.
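
To make the contrast concrete, here is a toy sketch (Python, purely
illustrative, and nothing to do with Cog's actual software) of the
difference between the ungrounded "reply on demand" Mo describes and a
symbol whose use is tied, however crudely, to what the robot can sense:

    # Ungrounded: the phrase is just a stored rule, as Mo describes.
    canned_replies = {"Goodnight": "Goodnight"}

    def parrot(utterance):
        # Return the trained response with no reference to the world.
        return canned_replies.get(utterance, "")

    # Weakly "grounded": the same word is now tied to sensor readings,
    # so its use depends on the robot's actual situation.
    def grounded_goodnight(light_level, human_is_leaving):
        # Only say "Goodnight" when the surroundings warrant it.
        if light_level < 0.2 and human_is_leaving:
            return "Goodnight"
        return ""

Even the second version is a caricature of grounding, of course, but it
points in the right direction: the symbol has to be hooked up to what
the robot can sense and do, not merely to other stored symbols.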


