Re: Dennett: Making Conscious Robots

From: McIntosh Chris (csm197@ecs.soton.ac.uk)
Date: Tue May 08 2001 - 18:17:25 BST


http://cogsci.soton.ac.uk/~harnad/Papers/Py104/dennett.rob.html

McIntosh:
Daniel Dennett has written a paper about his plans to build a humanoid
robot, 'Cog'.

> DENNETT:
>It is unlikely, in my opinion, that anyone will ever make a robot that is
>conscious in just the way we human beings are.

McIntosh:
There’s no unique way in which humans are conscious, as no human is
conscious in just the same way as any other (nor in the same way from one
moment to the next). But it would be surprising if we managed to create a
conscious robot, yet couldn’t eventually create one with a human level of
awareness. The sensory inputs need not be exactly the same – the lack of
vision in some humans for example gives them a somewhat different
conscious experience, but their intelligence is not greatly affected by
this.

Unfortunately Dennett doesn't give a clear definition of 'robot'. He's
happy to talk about a conscious robot, so are humans robots, or is a robot
distinguished by different means and materials of construction? Personally
I would prefer to reserve 'robot', in keeping with the common intuitive
understanding of it, as a word that tells certain non-conscious entities
apart from those that are conscious.

Dennett looks at four objections to the possibility of a conscious robot.
Firstly:

> DENNETT:
>(1) Robots are purely material things, and consciousness requires
>immaterial mind-stuff. (Old-fashioned dualism)
>over the centuries, every other phenomenon of initially "supernatural"
>mysteriousness has succumbed to an uncontroversial explanation within the
>commodious folds of physical science... magnetism is one of the best
>understood of physical phenomena, strange though its manifestations are.
>The "miracles" of life itself, and of reproduction, are now analyzed into
>the well-known intricacies of molecular biology. Why should consciousness
>be any exception? .. Why should the brain be the only complex physical
>object in the universe to have an interface with another realm of being?

McIntosh:
Consciousness has always been a somewhat different puzzle from the likes
of gravity and magnetism, which I doubt were ever attributed to another
realm. Our understanding of the laws of physics doesn't extend to
consciousness, though it isn't clear what kind of discovery would count as
revealing another aspect of our universe and what as 'another realm'.

> DENNETT:
>(2) Robots are inorganic (by definition), and consciousness can exist only
>in an organic brain.
>(But) as biochemistry has shown in matchless detail, the powers of organic
>compounds are themselves all mechanistically reducible and hence
>mechanistically reproducible at one scale or another in alternative
>physical media;

McIntosh:
Biochemistry has not shown that consciousness can be reproduced in an
alternative medium, and some organic compounds may have specific chemical
properties that couldn't be reproduced.

> DENNETT:
>but it is conceivable--if unlikely-- that the sheer speed and compactness
>of biochemically engineered processes in the brain are in fact
>unreproducible in other physical media

McIntosh:
If these biochemically engineered processes are computational then they
are not only reproducible but reproducible at far greater speeds by
computers. But the brain does more than symbol manipulation, and its
additional powers may be unreproducible in other physical media,
irrespective of speed and compactness.

> DENNETT:
>(3) Robots are artifacts, and consciousness abhors an artifact; only
>something natural, born not manufactured, could exhibit genuine
>consciousness. ...it cannot be because being born gives a complex of cells
>a property (aside from that historic property itself) that it could not
>otherwise have "in principle".

McIntosh:
This relates to Granny objection 6, 'People have real-time histories;
computers only have a pseudo-past'. Dennett is right to discard this
suggestion.

> DENNETT:
>There might, however, be a question of practicality. We have just seen
>how, as a matter of exigent practicality, it could turn out after all that
>organic materials were needed to make a conscious robot. For similar
>reasons, it could turn out that any conscious robot had to be, if not
>born, at least the beneficiary of a longish period of infancy. Making a
>fully-equipped conscious adult robot might just be too much work. It might
>be vastly easier to make an initially unconscious or nonconscious "infant"
>robot and let it "grow up" into consciousness, more or less the way we all
>do... a certain sort of process is the only practical way of designing all
>the things that need designing in a conscious being.

McIntosh:
It's hard to see how growth could make matters significantly easier for
the designer. Dennett must still plan for his adult Cog, but will also
need to get through the extremely complicated growth process.
Understanding how to introduce the processes in the brain that cause
consciousness must be an important starting point.

> DENNETT:
>(4) Robots will always just be much too simple to be conscious.
>After all, a normal human being is composed of trillions of parts (if we
>descend to the level of the macromolecules), and many of these rival in
>complexity and design cunning the fanciest artifacts that have ever been
>created. We consist of billions of cells, and a single human cell contains
>within itself complex "machinery" that is still well beyond the
>artifactual powers of engineers. We are composed of thousands of different
>kinds of cells, including thousands of different species of symbiont
>visitors, some of whom might be as important to our consciousness as
>others are to our ability to digest our food! If all that complexity were
>needed for consciousness to exist, then the task of making a single
>conscious robot would dwarf the entire scientific and engineering
>resources of the planet for millennia. And who would pay for it?

McIntosh:
If it’s possible for consciousness to understand itself it will happen
Eventually in the course of scientific progress, and the cost won’t be an
important factor.

> DENNETT:
>Everywhere else we have looked, we have found higher-level commonalities
>of function that permit us to substitute relatively simple bits for
>fiendishly complicated bits. Artificial heart valves work really very
>well, but they are orders of magnitude simpler than organic heart valves…
>Nobody ever said a prosthetic eye had to see as keenly, or focus as fast,
>or be as sensitive to color gradations as a normal human (or other animal)
>eye in order to "count" as an eye. If an eye, why not an optic nerve (or
>acceptable substitute thereof), and so forth, all the way in?
>…Some (Searle, 1992, Mangan, 1993) have supposed, most improbably, that
>this proposed regress would somewhere run into a non- fungible medium of
>consciousness, a part of the brain that could not be substituted on pain
>of death or zombiehood. There is no reason at all to believe that some one
>part of the brain is utterly irreplacible by prosthesis, provided we allow
>that some crudity, some loss of function, is to be expected in most
>substitutions of the simple for the complex. An artificial brain is, on
>the face of it, as "possible in principle" as an artificial heart, just
>much, much harder to make and hook up. Of course once we start letting
>crude forms of prosthetic consciousness--like crude forms of prosthetic
>vision or hearing--pass our litmus tests for consciousness (whichever
>tests we favor) the way is open for another boring debate, over whether
>the phenomena in question are too crude to count

McIntosh:
Certainly most internal organs could be artificially replaced without loss
of function. But perhaps a part of the brain could only be effectively
replaced by a substitute equally capable of consciousness. The most
suitable brain parts for artificial substitution may be specialist areas of
which we have little or no awareness such as those dealing with early
vision processes. Replacement of other parts might not be too serious
depending on the scale of replacement and whether other brain areas were
able to adapt and take on some of the lost functionality. However, this
reduction in the size of the original brain would diminish intelligence.
Too many nonconscious artificial components might lead to problems in
passing the Turing Test.

Artificial substitution of the rest of the sensorimotor system may be
possible. Cochlear implants already bypass the workings of the inner ear,
digitising sound information and signalling directly to the auditory
nerve. I doubt that the nerve itself plays a role in consciousness, so an
artificial hearing system could reach directly to the brain. Artificial
vision will be a much greater challenge, but efforts are already underway
to give the blind some kind of artificial sight. The only barrier to
replacing the nerves that give touch sensation may be the vast scale of
rewiring required.
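
To make the digitisation step concrete, the toy Python sketch below splits
a waveform into frequency bands and extracts a smoothed level per band,
which is roughly the kind of processing an implant's speech processor does
before stimulating the nerve. The band edges, smoothing window and function
names are my own assumptions, not any real device's design.

    # Illustrative sketch only: a toy filter-bank stage of the kind a
    # cochlear implant's speech processor might use to turn sound into
    # per-channel stimulation levels. Band edges and the smoothing window
    # are assumptions of mine, not any real device's specification.
    import numpy as np
    from scipy.signal import butter, lfilter

    def stimulation_levels(sound, rate, bands=((200, 500), (500, 1200),
                                               (1200, 3000), (3000, 7000))):
        """Split a waveform into frequency bands and return one smoothed
        envelope (a stimulation level over time) per band."""
        levels = []
        for low, high in bands:
            b, a = butter(2, [low / (rate / 2), high / (rate / 2)],
                          btype="band")
            band_signal = lfilter(b, a, sound)
            envelope = np.abs(band_signal)
            window = max(1, int(0.01 * rate))   # ~10 ms moving average
            envelope = np.convolve(envelope, np.ones(window) / window,
                                   mode="same")
            levels.append(envelope)
        return np.array(levels)

    # Example: a 440 Hz tone mostly excites the lowest channel.
    rate = 16000
    t = np.arange(0, 0.1, 1 / rate)
    print(stimulation_levels(np.sin(2 * np.pi * 440 * t), rate).mean(axis=1))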

I also doubt that the main length of nerve between a nerve ending and the
brain plays a role in consciousness, as for example the nature of sensation
is not affected by the length of nerve to the brain. There could be nothing
special about nerve endings either, since the brain couldn’t know what, if
anything, the nerve endings were ‘feeling’.

There is a problem though. If nerve endings don’t feel anything and the
brain only gets the equivalent of information signals, how could it ever
attain any understanding of the world? Well, intelligence is a phenomenon
of consciousness, so animals with brains that generate more consciousness
can be more intelligent and will tend to be favoured by natural selection.
Genes for a brain that could generate different conscious experiences,
corresponding appropriately to varied stimuli in the world, would confer a
big advantage. Sufficiently advanced brains can generate unique conscious
experiences from very similar patterns of sensory input.

However, the nature of a particular experience may be somewhat arbitrary,
with genetic instructions only guiding brain development and the conscious
experience that results from certain sensory inputs. Sweet foods, for
example, are generally more palatable, but certain acquired tastes and
preferences may develop for arbitrary reasons.

So even if cells in the nerve endings really do feel something, and are
conscious at some low level, I would argue that much of the sensorimotor
system could be artificially replaced. I think that the brain could make
sense of the world whether stimulated by nerves or some artificial
replacement. This would permit the unlikely possibility that although the
brain is real the inputs could be entirely virtual.

> DENNETT:
>Cog is just about life-size--that is, about the size of a human adult. Cog
>has no legs, but lives bolted at the hips, you might say, to its stand. It
>has two human-length arms, however, with somewhat simple hands on the
>wrists. It can bend at the waist and swing its torso, and its head moves
>with three degrees of freedom just about the way yours does. It has two
>eyes, each equipped with both a foveal high-resolution vision area and a
>low-resolution wide-angle parafoveal vision area, and these eyes saccade
>at almost human speed. That is, the two eyes can complete approximately
>three fixations a second, while you and I can manage four or five.

McIntosh:
Unfortunately for Dennett this human-like appearance won’t fool anyone
into believing that Cog is conscious.

> DENNETT:
>Cog will not be an adult at first, in spite of its adult size. It is being
>designed to pass through an extended period of artificial infancy, during
>which it will have to learn from experience, experience it will gain in
>the rough-and-tumble environment of the real world.

McIntosh:
The ability to learn from experience is essential for a high level of
of intelligence.

> DENNETT:
>sensitive membranes will be used on its fingertips and elsewhere, and,
>like human tactile nerves, the "meaning" of the signals sent along the
>attached wires will depend more on what the central control system "makes
>of them" than on their "intrinsic" characteristics. A gentle touch,
>signalling sought- for contact with an object to be grasped, will not
>differ, as an information packet, from a sharp pain, signalling a need for
>rapid countermeasures. It all depends on what the central system is
>designed to do with the packet, and this design is itself indefinitely
>revisable--something that can be adjusted either by Cog's own experience
>or by the tinkering of Cog's artificers.

McIntosh:
Humans have different nerves in the skin to detect pressure, heat, cold and
pain. Touch is a very important sense, and Cog’s limitation in this regard
would be very severe.
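
Dennett's point that the "meaning" of a packet lies in what the central
system is designed to do with it can be put in a few lines. The sketch
below is purely illustrative (the channel number and handler behaviours
are invented, not Cog's architecture): the identical packet is read as a
satisfied grasp by one routing table and as a pain demanding
countermeasures by another.

    # Toy illustration (not Cog's architecture) of the claim that the same
    # information packet means different things depending on what the
    # central system is designed to do with it. Channel numbers and
    # handler behaviours are invented.

    def grasp_handler(intensity):
        return "grip secured" if intensity > 0.5 else "close gripper gently"

    def withdraw_handler(intensity):
        return "withdraw arm now" if intensity > 0.5 else "no action"

    # The packet itself is just a channel id and a number.
    packet = {"channel": 7, "intensity": 0.9}

    # The "meaning" lives in a revisable routing table, not in the packet.
    treat_as_touch = {7: grasp_handler}
    treat_as_pain = {7: withdraw_handler}

    print(treat_as_touch[packet["channel"]](packet["intensity"]))
    print(treat_as_pain[packet["channel"]](packet["intensity"]))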

> DENNETT:
>One of its most interesting "innate" endowments will be software for
>visual face recognition. Faces will "pop out" from the background of other
>objects as items of special interest to Cog. It will further be innately
>designed to "want" to keep it's "mother's" face in view, and to work hard
>to keep "mother" from turning away

McIntosh:
Humans are especially good at face recognition. Cog would be
unconvincing without special skills in this regard.
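
To give a sense of what an innate "faces pop out" module might involve
computationally, here is a minimal sketch using OpenCV's stock Haar-cascade
face detector. It is an assumed modern stand-in, not the Cog project's
actual software, and the image file name is hypothetical.

    # Minimal sketch of a "faces pop out" module using OpenCV's stock Haar
    # cascade. Purely illustrative, not Cog's actual software; the image
    # file name is hypothetical.
    import cv2

    def find_faces(image_path):
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        grey = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
        # Each detection is a bounding box an attention system could then
        # foveate on and try to keep in view.
        return cascade.detectMultiScale(grey, scaleFactor=1.1,
                                        minNeighbors=5)

    print(find_faces("mother.jpg"))   # hypothetical image file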

> DENNETT:
>Cog will actually be a multi-generational series of ever improved models,
>but of course that is the way any complex artifact gets designed. Any
>feature that is not innately fixed at the outset, but does get itself
>designed into Cog's control system through learning, can then often be
>lifted whole (with some revision, perhaps) into Cog-II, as a new bit of
>innate endowment designed by Cog itself--or rather by Cog's history of
>interactions with its environment.

McIntosh:
Even when individual software modules have been thoroughly planned and
tested, there can be problems when they are introduced as part of a larger
system. The haphazard way in which Cog’s features are expected to develop
does not lend itself to subsequent extraction of those features. It’s also
unclear how understanding of inputs that would occur in the extracted
module could be made consistent with understanding in the new brain.
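
The mechanics of the "lifting" itself are at least easy to imagine:
archive whatever parameters Cog-I has acquired and use them to initialise
Cog-II. The sketch below shows the general idea with a toy learning loop;
the parameter names, file name and scoring function are inventions of
mine, not anything from the Cog project.

    # Sketch of "lifting" what one generation has learned into the next:
    # archive Cog-I's acquired parameters and use them to initialise
    # Cog-II. Parameter names, file name and the toy scoring function are
    # my own inventions, not anything from the Cog project.
    import json
    import random

    def train(params, trials=1000):
        """Stand-in for a lifetime of learning: nudge parameters at random
        and keep any change that improves a toy score."""
        def score(p):
            return -sum((v - 0.7) ** 2 for v in p.values())
        for _ in range(trials):
            key = random.choice(list(params))
            candidate = dict(params, **{key: params[key] +
                                        random.gauss(0, 0.05)})
            if score(candidate) > score(params):
                params = candidate
        return params

    cog_1 = {"reach_gain": 0.1, "gaze_bias": 0.2}    # Cog-I starts naive
    cog_1 = train(cog_1)

    with open("cog1_learned.json", "w") as f:        # archive the learning
        json.dump(cog_1, f)

    with open("cog1_learned.json") as f:             # Cog-II is "born" with it
        cog_2 = json.load(f)
    print(cog_2)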

> DENNETT:
>How plausible is the hope that Cog can retrace the steps of millions of
>years of evolution in a few months or years of laboratory exploration?...
>The acquired design innovations of Cog-I can be immediately transferred to
>Cog-II, a speed-up of evolution of tremendous, if incalculable, magnitude.
>Moreover, if you bear in mind that, unlike the natural case, there will be
>a team of overseers ready to make patches whenever obvious shortcomings
>reveal themselves, and to jog the systems out of ruts whenever they enter
>them, it is not so outrageous a hope.

McIntosh:
The hope for such progress over a few years was not plausible, ten years
later Cog still shows no signs of consciousness. If Dennett is using
unsuitable materials that won’t allow for consciousness then no amount of
this enhanced natural selection will help Cog become conscious.

> DENNETT:
>Obviously this will not work unless the team manages somehow to give Cog a
>motivational structure that can be at least dimly recognized, responded
>to, and exploited by naive observers. In short, Cog should be as human as
>possible in its wants and fears, likes and dislikes.
>Cog won't work at all unless it has its act together in a daunting number
>of different regards. It must somehow delight in learning, abhor error,
>strive for novelty, recognize progress. It must be vigilant in some
>regards, curious in others, and deeply unwilling to engage in self-
>destructive activity. While we are at it, we might as well try to make it
>crave human praise and company, and even exhibit a sense of humor.

McIntosh:
A surprising set of objectives from someone who doubts that robots could be
conscious in the same way as humans.
I doubt though that Cog could convince in any of these regards without a
considerable proportion of consciousness-enabled brain. The characteristics
couldn’t be precisely programmed, the ‘sense of humor’ would be a
particular challenge.

> DENNETT:
>A recent criticism of "strong AI" that has received quite a bit of
>attention is the so-called problem of "symbol grounding" (Harnad, 1990).
>It is all very well for large AI programs to have data structures that
>purport to refer to Chicago, milk, or the person to whom I am now talking,
>but such imaginary reference is not the same as real reference, according
>to this line of criticism. These internal "symbols" are not
>properly "grounded" in the world, and the problems thereby eschewed by
>pure, non- robotic, AI are not trivial or peripheral. …I submit that Cog
>moots the problem of symbol grounding, without having to settle its status
>as a criticism of "strong AI". If the day ever comes for Cog to comment to
>anybody about Chicago, the question of whether Cog is in any position to
>do so will arise for exactly the same reasons, and be resolvable on the
>same considerations, as the parallel question about the reference of the
>word "Chicago" in the idiolect of a young child.

McIntosh:
I'm not convinced of the special role for a sensorimotor system in
'grounding meaning in the real world'. I doubt that the brain
(which must be situated) can know anything about the world it inhabits
- maybe it can only imagine one based upon its inputs, which could
theoretically be virtual. However, since reproduction is impossible in a
virtual world, the instructions necessary for the development of conscious
brains must have evolved thanks to situated (though perhaps not
'grounding') sensorimotor systems.

> DENNETT:
>Another claim is that nothing could properly "matter" to an artificial
>intelligence, and mattering (it is claimed) is crucial to consciousness.
>Cog will be equipped with some "innate" but not at all arbitrary
>preferences, and hence provided of necessity with the concomitant capacity
>to be "bothered" by the thwarting of those preferences, and "pleased" by
>the furthering of the ends it was innately designed to seek. Some may want
>to retort: "This is not real pleasure or pain, but merely a simulacrum."
>Perhaps, but on what grounds will they defend this claim? Cog may be said
>to have quite crude, simplistic, one-dimensional pleasure and pain,
>cartoon pleasure and pain if you like, but then the same might also be
>said of the pleasure and pain of simpler organisms--clams or houseflies,
>for instance. Most, if not all, of the burden of proof is shifted by Cog,
>in my estimation. The reasons for saying that something does matter to Cog
>are not arbitrary; they are exactly parallel to the reasons we give for
>saying that things matter to us and to other creatures. …more than a few
>participants in the Cog project are already musing about what obligations
>they might come to have to Cog, over and above their obligations to the
>Cog team.

McIntosh:
The concept of mattering is just a distraction, it’s not something that
could be tested. It describes the interest a conscious entity has in a
particular event in terms of its well-being (even if it doesn’t know about
that event). And there is no comparison between the pain of a simple
organism, which really hurts, to the data that flashes round in Cog’s
electronic circuits. Only if it got near to passing the robotic or penpal
Turing Test would the burden of proof be shifted by Cog.

> DENNETT:
>if a robot were really conscious, we would have to be prepared to believe
>it about its own internal states. The information visible on the banks of
>monitors, or gathered by the gigabyte on hard disks, will be at the outset
>almost as hard to interpret, even by Cog's own designers, as the
>information obtainable by such "third- person" methods as MRI and CT
>scanning in the neurosciences. Cog will be designed from the outset to
>redesign itself as much as possible, there is a high probability that the
>designers will simply lose the standard hegemony of the artificer ("I made
>it, so I know what it is supposed to do, and what it is doing now!"). In
>fact, I would gladly defend the conditional prediction: if Cog develops to
>the point where it can conduct what appear to be robust and well-
>controlled conversations in something like a natural language, it will
>certainly be in a position to rival its own monitors (and the theorists
>who interpret them) as a source of knowledge about what it is doing and
>feeling, and why.

McIntosh:
An ability to consistently avoid the frame problem (knowledge / behaviour
breakdown) in coherent conversation, even if it doesn’t quite pass the
Turing Test, would suggest that Cog is the best source of information about
what it's feeling.


