Re: Dennett: Making a Conscious Robot

From: Blakemore, Philip (pjb397@ecs.soton.ac.uk)
Date: Mon May 15 2000 - 12:20:57 BST


1. ARE CONSCIOUS ROBOTS POSSIBLE "IN PRINCIPLE"?

> DENNETT:
> It is unlikely, in my opinion, that anyone will ever make a robot
> that is conscious in just the way we human beings are.

> Edwards:
> Dennett expresses his opinion here. I, however, am undecided about it; robots
> could have a different kind of consciousness (perhaps a hive mind for a
> network?).

A different kind of consciousness? I thought we were trying to create a
machine capable of human consciousness. A machine that merely acts
somewhat like a conscious being is not fully humanly conscious.

> DENNETT:
> (1) Robots are purely material things, and consciousness requires
> immaterial mind-stuff. (Old-fashioned dualism)

> It continues to amaze me how attractive this position still is to many people.
> I would have thought a historical perspective alone would make this view seem
> ludicrous: over the centuries, every other phenomenon of
> initially "supernatural" mysteriousness has succumbed to an uncontroversial
> explanation within the commodious folds of physical science. [...] Why should
> consciousness be any exception? Why should the brain be the only complex
> physical object in the universe to have an interface with another realm of
> being?

> Edwards:
> I totally agree with this. It should only be a matter of time until we can
> explain this, so far, unexplainable phenomenon of consciousness.

Should? We have no guarantee of that. I am not saying this is
impossible. 30 years ago, no-one would have believed the level of
technology available now. However, are we any closer to T3 after 50 years
of research? Artificial Intelligence has come a long way in creating fast
computers capable of storing thousands of options and expanding the most
likely ones to reach a goal. Newer implementations, such as Neural
Networks, have provided humanlike "machine" behaviour. As for T3, a robot
capable of interacting with the real world, we are no closer. In my opinion,
consciousness is governed by the parts that we are made of; the exact
atoms and combinations of them provide a unique way to create life, minds
and consciousness. We will not be able to create consciousness until we
understand every part of our brain and how it works. Consciousness is a
deeper part of the brain and central to human existence.
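(As a purely illustrative aside - this is my own toy sketch, nothing from
Dennett's paper - the kind of "store many options and expand the most
promising one" search I mean above looks roughly like this in Python. The
graph, the goal and the heuristic are all made up for the example.)

    import heapq

    def best_first_search(start, goal, neighbours, heuristic):
        # Keep a frontier of candidate states; always expand the one the
        # heuristic rates as most likely to lead to the goal.
        frontier = [(heuristic(start), start, [start])]
        seen = set()
        while frontier:
            _, state, path = heapq.heappop(frontier)   # most promising option
            if state == goal:
                return path
            if state in seen:
                continue
            seen.add(state)
            for nxt in neighbours(state):
                if nxt not in seen:
                    heapq.heappush(frontier, (heuristic(nxt), nxt, path + [nxt]))
        return None

    # Hypothetical toy problem: find a route between letters on a small graph.
    graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D', 'E'], 'D': ['E'], 'E': []}
    route = best_first_search('A', 'E', lambda s: graph[s],
                              heuristic=lambda s: ord('E') - ord(s))
    print(route)   # ['A', 'C', 'E']

Fast, yes; but nothing in that loop is any closer to consciousness than a
toaster is.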

> DENNETT:
> I suspect that dualism would never be seriously considered if there
> weren't such a strong undercurrent of desire to protect the mind from science,
> by supposing it composed of a stuff that is in principle uninvestigatable by
> the methods of the physical sciences.

> Edwards:
> If you draw the line for scientific advancement here, why bother going this
> far at all? What would these people think if it were explained?

I agree with this. You cannot "draw the line", in case someone does find
something out. I am just saying that the likelihood of finding the key to
consciousness is mind-bogglingly small!

> DENNETT:
> (2) Robots are inorganic (by definition), and consciousness can exist
> only in an organic brain.

> Vitalism is deservedly dead; as biochemistry has shown in matchless detail,
> the powers of organic compounds are themselves all mechanistically reducible
> and hence mechanistically reproducible at one scale or another in alternative
> physical media; but it is conceivable--if unlikely-- that the sheer speed and
> compactness of biochemically engineered processes in the brain are in fact
> unreproducible in other physical media (Dennett, 1987). So there might be
> straightforward reasons of engineering that showed that any robot that could
> not make use of organic tissues of one sort or another within its fabric would
> be too ungainly to execute some task critical for consciousness.

> Is a robot with "muscles" instead of motors a robot within the meaning of the
> act? If muscles are allowed, what about lining the robot's artificial retinas
> with genuine organic rods and cones instead of relying on relatively clumsy
> color-tv technology?

> Edwards:
> This seems to be the question of which Turing test (t1 to T5) the robot has
> to pass to be called conscious. Does it have to be organic? Or just be
> functionally the same as us?

Does it have to be organic? By definition, a robot cannot be. If a robot's
behaviour appeared to show consciousness, how could we prove that it was
conscious? We can't. We can either take the robot's word for it, or see how
it was built. All technology, at the moment, can only achieve what has
been programmed into it. A machine, toaster, video player, stereo or
robot can only do what a human has told it to do. For consciousness to
exist in robots, a level of independent, high-level extraction of data is
necessary.

> Edwards:
> I believe that a line must be drawn between human consciousness and any
> other kind.

Then you must agree with my point at the beginning of this article!

> Edwards:
> Is a manufactured robot that is identical to a human, a human? Or a robot?
> How can you tell, if it's identical? If we do draw a line
> somewhere, do we get a new discrimination, consciousness-ism? If no line is
> drawn, all types of consciousness must be equal, is this true?

Interesting point. If it is truly identical, no proof either way can be
made. Since we assume each human has consciousness, the robot will pass the
test, as we can see no difference.

Consciousnessism!! That definitely made me smile. An ant must be conscious to
defend its home or to find food. However, are you saying that an ant's
consciousness is at the same level as a human's? Impossible.

> DENNETT:
> (3) Robots are artifacts, and consciousness abhors an artifact; only
> something natural, born not manufactured, could exhibit genuine consciousness.
>
> Consider the general category of creed we might call origin essentialism:
> only wine made under the direction of the proprietors of
> Chateau Plonque counts as
> genuine Chateau Plonque; only a canvas every blotch on which was caused by the
> hand of Cezanne counts as a genuine Cezanne; only someone "with Cherokee
> blood" can be a real Cherokee. There are perfectly respectable reasons,
> eminently defensible in a court of law, for maintaining such distinctions, so
> long as they are understood to be protections of rights growing out of
> historical processes. If they are interpreted, however, as indicators
> of "intrinsic properties" that set their holders apart from their otherwise
> indistinguishable counterparts, they are pernicious nonsense. Let us dub
> origin chauvinism the category of view that holds out for some mystic
> difference (a difference of value, typically) due simply to such a fact about
> origin. Perfect imitation Chateau Plonque is exactly as good a wine as the
> real thing, counterfeit though it is, and the same holds for the fake Cezanne,
> if it is really indistinguishable by experts. And of course no person is
> intrinsically better or worse in any regard just for having or not having
> Cherokee (or Jewish, or African) "blood."

> Edwards:
> Another example could be an artificially created embryo being grown in, and
> then born from, an artificial womb. Is this still an artifact?

An embryo is not an artifact. Whether artificially created or not, an
embryo (by definition) has the capability of life itself.

> DENNETT:
> If consciousness abhors an artifact, it cannot be because being born
> gives a complex of cells a property (aside from that historic property itself)
> that it could not otherwise have "in principle". There might, however, be a
> question of practicality. We have just seen how, as a matter of exigent
> practicality, it could turn out after all that organic materials were needed
> to make a conscious robot. For similar reasons, it could turn out that any
> conscious robot had to be, if not born, at least the beneficiary of a longish
> period of infancy. Making a fully-equipped conscious adult robot might just
> be too much work. It might be vastly easier to make an initially unconscious
> or nonconscious "infant" robot and let it "grow up" into consciousness, more
> or less the way we all do. This hunch is not the disreputable claim that a
> certain sort of historic process puts a mystic stamp of approval on its
> product, but the more interesting and plausible claim that a certain sort of
> process is the only practical way of designing all the things that need
> designing in a conscious being.

> Edwards:
> I agree with Dennett that a period of infancy would be useful for a new
> consciousness, if only so that it, and we, can understand how we achieve
> consciousness. However, a robot could conceivably be constructed with a
> memory, a history, and hence would have consciousness already.

Having knowledge about something, say Niagara Falls, is nothing like the
experience of seeing it with your own eyes and hearing the rush of the water.

> Edwards:
> This knowledge would either have to be worked out by hand (a lengthy
> process), or a different robot could go through infancy and the memory
> copied.

I have to agree. If all memory and consciousness is just data from our
brain, of course it can be transferred. However, the personal experience
of something still has the final say.

> DENNETT:
> If the best the roboticists can hope for is the creation of some
> crude, cheesy, second-rate, artificial consciousness, they still win. Still,
> it is not a foregone conclusion that even this modest goal is reachable.
>
> (4) Robots will always just be much too simple to be conscious.
>
> If no other reason can be found, this may do to ground your skepticism about
> conscious robots in your future, but one shortcoming of this last reason is
> that it is scientifically boring. If this is the only reason there won't be
> conscious robots, then consciousness isn't that special, after all.

> Edwards:
> If the robot is too simple, make it more complex. No one said it has to be a
> certain size. Surely, with no size limit and molecular robotics,
> a sufficiently complex machine can be built.

If we can't make an artificial ant conscious, we can't expect to
build a machine with human consciousness.

> DENNETT:
> Another shortcoming with this reason is that it is dubious on its
> face. Everywhere else we have looked, we have found higher-level commonalities
> of function that permit us to substitute relatively simple bits for fiendishly
> complicated bits. Artificial heart valves work really very well, but they are
> orders of magnitude simpler than organic heart valves. Artificial ears and
> eyes that will do a serviceable (if crude) job of substituting for lost
> perceptual organs are visible on the horizon. Nobody ever said a prosthetic
> eye had to see as keenly, or focus as fast, or be as sensitive to colour
> gradations as a normal human (or other animal) eye in order to "count" as an
> eye. If an eye, why not an optic nerve (or acceptable substitute thereof),
> and so forth, all the way in?

> Some (Searle, 1992, Mangan, 1993) have supposed, most improbably, that this
> proposed regress would somewhere run into a non-fungible medium of
> consciousness, a part of the brain that could not be substituted on pain of
> death or zombiehood. Once the implications of that view are spelled
> out (Dennett, 1993a, 1993b), one can see that it is a non-starter. There is no
> reason at all to believe that some one part of the brain is utterly
> irreplaceable by prosthesis,

> Edwards:
> Agreed.

Seconded! (Though actually replacing it all seems highly improbable.)

> DENNETT:
> provided we allow that some crudity, some loss of function, is to be
> expected in most substitutions of the simple for the complex. An artificial
> brain is, on the face of it, as "possible in principle" as an artificial
> heart, just much, much harder to make and hook up. Of course once we start
> letting crude forms of prosthetic consciousness--like crude forms of
> prosthetic vision or hearing--pass our litmus tests for
> consciousness (whichever tests we favor) the way is open for another boring
> debate, over whether the phenomena in question are too crude to count.

> Edwards:
> There could be the possibility that, with this slight loss of function, when
> all parts of the brain have been substituted there is too much simplification
> and so it might not be conscious.

We are back to the point that once a machine is said to have
consciousness, no proof can be made. For myself, anything short of "real"
human consciousness is a fallacy. I certainly agree that artificial parts
can be used to replace human ones, but all the parts that humans have
replaced so far are secondary to consciousness. Try simplifying the brain -
we don't even know how it works, let alone how to simplify it.

2. THE COG PROJECT: A HUMANOID ROBOT

> DENNETT:
> A much more interesting tack to explore, in my opinion, is simply to
> set out to make a robot that is theoretically interesting independent of the
> philosophical conundrum about whether it is conscious. Such a robot would have
> to perform a lot of the feats that we have typically associated with
> consciousness in the past, but we would not need to dwell on that issue from
> the outset. Maybe we could even learn something interesting about what the
> truly hard problems are without ever settling any of the issues about
> consciousness.

> Edwards:
> Here, Dennett describes Cog as an attempt to produce a humanoid robot
> capable of many of the things we take for granted, like speech, basic
> movement, hand-eye coordination, learning, etc.

Fine.

> DENNETT:
> Such a project is now underway at MIT. Under the direction of
> Professors Rodney Brooks and Lynn Andrea Stein of the AI Lab, a group of
> bright, hard-working young graduate students are labouring as I speak to
> create Cog, the most humanoid robot yet attempted, and I am happy to be a part
> of the Cog team. Cog is just about life-size--that is, about the size of a
> human adult. Cog has no legs, but lives bolted at the hips, you might say, to
> its stand. It has two human-length arms, however, with somewhat simple hands
> on the wrists. It can bend at the waist and swing its torso, and its head
> moves with three degrees of freedom just about the way yours does. It has two
> eyes, each equipped with both a foveal high-resolution vision area and a low-
> resolution wide-angle parafoveal vision area, and these eyes saccade at
> almost human speed.

> Cog will not be an adult at first, in spite of its adult size. It is being
> designed to pass through an extended period of artificial infancy, during
> which it will have to learn from experience, experience it will gain in the
> rough-and-tumble environment of the real world. Like a human infant, however,
> it will need a great deal of protection at the outset, in spite of the fact
> that it will be equipped with many of the most crucial safety-systems of a
> living being. It has limit switches, heat sensors, current sensors, strain
> gauges and alarm signals in all the right places to prevent it from destroying
> its many motors and joints. [...] A gentle touch, signalling sought-for
> contact with an object to be grasped, will not differ, as an information
> packet, from a sharp pain, signalling a need for rapid countermeasures.
> It all depends on what the central system is designed to do with the packet,
> and this design is itself indefinitely revisable--something that can be
> adjusted either by Cog's own experience or by the tinkering of
> Cog's artificers.

> Edwards:
> Basically Cog will learn as we do when we are young. It has the same
> automatic protection signals that a baby has. The software controlling how
> these signals are processed can be changed indefinitely to produce
> the correct output, either by Cog or the team.

Learning while young is essential. If humans have to do it, it
seems logical that a machine capable of doing the same things needs to
copy that learning method.
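(To make Dennett's point concrete - that a gentle touch and a sharp pain
need not differ as information packets, only in what the revisable central
system is designed to do with them - here is a toy sketch of my own. The
packet format, threshold and responses are invented; this is not the actual
Cog software.)

    from dataclasses import dataclass

    @dataclass
    class Packet:
        sensor: str    # e.g. "hand_strain_gauge"
        value: float   # a raw number; it carries no meaning by itself

    class CentralSystem:
        # What a packet "means" is fixed only by this revisable design.
        def __init__(self, pain_threshold=0.7):
            self.pain_threshold = pain_threshold   # tunable by the team, or by learning

        def respond(self, p):
            if p.value < self.pain_threshold:
                return "gentle contact: keep closing the hand"
            return "sharp pain: withdraw the arm immediately"

    cog = CentralSystem()
    print(cog.respond(Packet("hand_strain_gauge", 0.2)))   # sought-for contact
    print(cog.respond(Packet("hand_strain_gauge", 0.9)))   # rapid countermeasures
    cog.pain_threshold = 0.3                               # revise the design...
    print(cog.respond(Packet("hand_strain_gauge", 0.4)))   # ...same packet, new meaning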

> DENNETT:
> I haven't mentioned yet that Cog will actually be a multi-
> generational series of ever improved models (if all goes well!), but of course
> that is the way any complex artifact gets designed. Any feature that is not
> innately fixed at the outset, but does get itself designed into Cog's control
> system through learning, can then often be lifted whole (with some revision,
> perhaps) into Cog-II, as a new bit of innate endowment designed by Cog itself
> -- or rather by Cog's history of interactions with its environment. [...]
> Although Cog is not specifically intended to demonstrate any particular neural
> net thesis, it should come as no surprise that Cog's nervous system is a
> massively parallel architecture capable of simultaneously training up an
> indefinite number of special-purpose networks or circuits, under various
> regimes.

> Edwards:
> This will speed up the evolution of Cog, by tremendous amounts. A few years
> in a lab may be akin to a million years of organic evolution.

Where do these statistics come from? Although machines can run programs
faster than humans can think, Cog is designed to interact with the real
world, at our speed. Any learning or growth has to be achieved on our
timeline.

> DENNETT:
> One talent that we have hopes of teaching to Cog is a rudimentary
> capacity for human language. We are going to try to get Cog to build language
> the hard way, the way our ancestors must have done, over thousands of
> generations. Cog has ears (four, because it's easier to get good localization
> with four microphones than with carefully shaped ears like ours!) and some
> special-purpose signal-analysing software is being developed to give Cog a
> fairly good chance of discriminating human speech sounds, and probably the
> capacity to distinguish different human voices. Cog will also have to have
> speech synthesis hardware and software, of course, but decisions have not yet
> been reached about the details. It is important to have Cog as well-equipped
> as possible for rich and natural interactions with human beings.

> Edwards:
> It would be breakthrough enough to get Cog to have a natural language
> conversation with a human, let alone all the other things they hope to do.

True.
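(On Dennett's aside about four microphones: localization works by
estimating the tiny difference in arrival time of a sound at each microphone
pair. The sketch below is my own illustration, not the Cog signal-analysing
software; the microphone spacing, sample rate and speed of sound are assumed
values.)

    import numpy as np

    def arrival_angle(sig_a, sig_b, fs, mic_spacing=0.2, c=343.0):
        # Estimate direction of arrival for one microphone pair from the
        # time-difference-of-arrival found by cross-correlation.
        corr = np.correlate(sig_a, sig_b, mode="full")
        lag = np.argmax(corr) - (len(sig_b) - 1)   # delay of sig_a in samples
        tdoa = lag / fs                            # delay in seconds
        # Far-field approximation: delay = spacing * sin(angle) / speed of sound
        s = np.clip(tdoa * c / mic_spacing, -1.0, 1.0)
        return np.degrees(np.arcsin(s))

    # Hypothetical test: a click that reaches mic B three samples before mic A,
    # i.e. the sound source sits slightly over on B's side.
    fs = 44100
    click = np.zeros(256)
    click[100] = 1.0
    mic_b = click
    mic_a = np.roll(click, 3)            # A hears it a little later
    print(round(arrival_angle(mic_a, mic_b, fs), 1))   # small positive angle towards B

With two such pairs at right angles you get a direction in the horizontal
plane, which is presumably why four cheap microphones beat two shaped ears.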

> DENNETT:
> Obviously this will not work unless the team manages somehow to give
> Cog a motivational structure that can be at least dimly recognized, responded
> to, and exploited by naive observers. In short, Cog should be as human as
> possible in its wants and fears, likes and dislikes. [...] This is so for many
> reasons, of course. Cog won't work at all unless it has its act together in a
> daunting number of different regards. It must somehow delight in learning,
> abhor error, strive for novelty, recognize progress. It must be vigilant in
> some regards, curious in others, and deeply unwilling to engage in self-
> destructive activity. While we are at it, we might as well try to make it
> crave human praise and company, and even exhibit a sense of humour.

> Edwards:
> Cog must have a purpose in life, or he will not do anything, so he needs some
> goals and preferences.

These goals and preferences can be programmed in by the programmers. Just
as humans are driven by instinct, emotions and feelings, the same or
different drives can be added to a robot.
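(What "programming in drives" might look like, in the crudest possible form -
a toy sketch of my own, not the Cog team's actual motivational structure.
Each drive is just a weight, and the robot picks whichever candidate action
best satisfies its current wants. All the names and numbers are invented.)

    drives = {"delight_in_learning": 1.0,
              "abhor_error": -2.0,
              "crave_praise": 0.8,
              "avoid_self_damage": -5.0}

    candidate_actions = [
        {"name": "explore a new object", "learning": 0.9, "error": 0.3, "praise": 0.2, "damage": 0.1},
        {"name": "repeat a known trick", "learning": 0.1, "error": 0.05, "praise": 0.7, "damage": 0.0},
        {"name": "poke its own eye",     "learning": 0.4, "error": 0.2, "praise": 0.0, "damage": 0.9},
    ]

    def utility(action):
        return (drives["delight_in_learning"] * action["learning"]
                + drives["abhor_error"]       * action["error"]
                + drives["crave_praise"]      * action["praise"]
                + drives["avoid_self_damage"] * action["damage"])

    best = max(candidate_actions, key=utility)
    print(best["name"])   # the drives, not the programmer, pick the action each time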

> DENNETT:
> It is arguable that every one of the possible virtual machines
> executable by Cog is minute in comparison to a real human brain. In short,
> Cog has a tiny brain. There is a big wager being made: the parallelism made
> possible by this arrangement will be sufficient to provide real-time control
> of importantly humanoid activities occurring on a human time scale. If this
> proves to be too optimistic by as little as an order of magnitude, the whole
> project will be forlorn, for the motivating insight for the project is that by
> confronting and solving actual, real time problems of self-protection, hand-
> eye coordination, and interaction with other animate beings, Cog's artificers
> will discover the sufficient conditions for higher cognitive functions in
> general--and maybe even for a variety of consciousness that would satisfy the
> skeptics.

> Edwards:
> There seems to be quite a risk involved here. How much computation is
> necessary for these simple operations? Can you find out the sufficient
> conditions for higher cognitive functions if Cog cannot perform them,
> because he is too simple?

These simple operations are extremely difficult to do and involve vast
computational power. If Cog is too simple, it will not know that it was
meant to perform higher cognitive functions, let alone perform them.

> DENNETT:
> It is important to recognize that although the theoretical importance
> of having a body has been appreciated ever since Alan Turing (1950) drew
> specific attention to it in his classic paper, "Computing Machinery and
> Intelligence," within the field of Artificial Intelligence there has long been
> a contrary opinion that robotics is largely a waste of time, money and effort.
> According to this view, whatever deep principles of organization make
> cognition possible can be as readily discovered in the more abstract realm of
> pure simulation, at a fraction of the cost. In many fields, this thrifty
> attitude has proven to be uncontroversial wisdom. [...] Closer to home,
> simulations of ingeniously oversimplified imaginary organisms foraging in
> imaginary environments, avoiding imaginary predators and differentially
> producing imaginary offspring are yielding important insights into the
> mechanisms of evolution and ecology in the new field of Artificial Life. So it
> is something of a surprise to find this AI group conceding, in effect, that
> there is indeed something to the skeptics' claim (e.g., Dreyfus and Dreyfus,
> 1986) that genuine embodiment in a real world is crucial to consciousness.
> Not, I hasten to add, because genuine embodiment provides some special vital
> juice that mere virtual-world simulations cannot secrete, but for the more
> practical reason--or hunch--that unless you saddle yourself with all the
> problems of making a concrete agent take care of itself in the real world, you
> will tend to overlook, underestimate, or misconstrue the deepest problems of
> design.

> Edwards:
> I agree that to create an artificial consciousness by interacting and learning
> with/from its environment, it is easier and probably more reliable to create a
> robot than a computer simulation.

Definitely.

> Edwards:
> This is due to the necessary simplification of a computer simulation, which
> may miss some vital component. The computing power necessary to
> accurately model a robot and its environment is far beyond our current
> processing power. Building a robot, however, is not.

We do not have the computational power to perform complete natural
language processing (as admitted earlier). Where is this advanced robot
going to get the necessary power from?

> DENNETT:
> Other practicalities are more obvious, or at least more immediately
> evocative to the uninitiated. Three huge red "emergency kill" buttons have
> already been provided in Cog's environment, to ensure that if Cog happens to
> engage in some activity that could injure or endanger a human interactor (or
> itself), there is a way of getting it to stop. But what is the appropriate
> response for Cog to make to the KILL button? If power to Cog's motors is
> suddenly shut off, Cog will slump, and its arms will crash down on whatever is
> below them. Is this what we want to happen? Do we want Cog to drop whatever it
> is holding? What should "Stop!" mean to Cog? This is a real issue about which
> there is not yet any consensus.

> Edwards:
> Could Cog get to a point where it is not acceptable to turn him off? Where he
> might protest, and claim the rights of any conscious being?

Cog is still under the control of the computer program. Cog will not
object unless it has been programmed to do so.
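(The "what should Stop! mean" question is really a choice between several
possible shutdown behaviours. A toy sketch of that design space, entirely my
own invention and not how Cog's kill buttons actually work:)

    from enum import Enum, auto

    class StopMode(Enum):
        CUT_POWER = auto()      # motors off: arms slump, anything held is dropped
        FREEZE = auto()         # hold current joint positions under power
        SAFE_RETRACT = auto()   # set the object down, park the arms, then power off

    def handle_kill(mode, holding_object):
        # Return the sequence of actions a given kill policy would trigger.
        if mode is StopMode.CUT_POWER:
            return ["cut power to all motors (arms drop, object falls)"]
        if mode is StopMode.FREEZE:
            return ["lock joints at current position", "keep the grip closed"]
        steps = ["lower the held object to a surface"] if holding_object else []
        return steps + ["move arms to the rest position", "power off the motors"]

    for mode in StopMode:
        print(mode.name, "->", handle_kill(mode, holding_object=True))

Which of these counts as the "right" response to an emergency is exactly the
kind of design question Dennett says there is no consensus on yet.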

> DENNETT:
> Let's consider Cog merely as a prosthetic aid to philosophical
> thought-experiments, a modest but by no means negligible role for Cog to
> play.

3. THREE PHILOSOPHICAL THEMES ADDRESSED

> DENNETT:
> A recent criticism of "strong AI" that has received quite a bit of
> attention is the so-called problem of "symbol grounding" (Harnad, 1990). It is
> all very well for large AI programs to have data structures that purport to
> refer to Chicago, milk, or the person to whom I am now talking, but such
> imaginary reference is not the same as real reference, according to this line
> of criticism. These internal "symbols" are not properly "grounded" in the
> world, and the problems thereby eschewed by pure, non-robotic, AI are not
> trivial or peripheral. As one who discussed, and ultimately dismissed, a
> version of this problem many years ago (Dennett, 1969, p.182ff), I would not
> want to be interpreted as now abandoning my earlier view. I submit that Cog
> moots the problem of symbol grounding, without having to settle its status as
> a criticism of "strong AI". Anything in Cog that might be a candidate for
> symbolhood will automatically be "grounded" in Cog's real predicament, as
> surely as its counterpart in any child, so the issue doesn't arise, except as
> a practical problem for the Cog team, to be solved or not, as fortune
> dictates. If the day ever comes for Cog to comment to anybody about Chicago,
> the question of whether Cog is in any position to do so will arise for exactly
> the same reasons, and be resolvable on the same considerations, as the
> parallel question about the reference of the word "Chicago" in the idiolect
> of a young child.

> Edwards:
> But does Cog understand what Chicago is? Or is its explanation or meaning a
> group of squiggles to be called upon when needed? Does it really matter, if
> Cog can perform and interact as well as expected? Does a child know what
> Chicago is? Or does it just remember what it has been told?

For an understanding of Chicago, you need to define what level of
understanding passes or fails the test. Is it knowing where it is on the
globe, what restaurants are there, what people live there, or the experience
of actually being there?

Does it really matter? Yes, you need to know what you want a T3 robot to
do before building it! A child isn't expected to have a great deal of
knowledge, but knowing that the place exists would probably be enough.

> DENNETT:
> Another claim that has often been advanced, most carefully by
> Haugeland (1985), is that nothing could properly "matter" to an artificial
> intelligence, and mattering (it is claimed) is crucial to consciousness.
> Haugeland restricted his claim to traditional GOFAI systems, and left robots
> out of consideration. Would he concede that something could matter to Cog?
> The question, presumably, is how seriously to weigh the import of the quite
> deliberate decision by Cog's creators to make Cog as much as possible
> responsible for its own welfare. Cog will be equipped with some "innate" but
> not at all arbitrary preferences, and hence provided of necessity with the
> concomitant capacity to be "bothered" by the thwarting of those preferences,
> and "pleased" by the furthering of the ends it was innately designed to seek.
> Some may want to retort: "This is not real pleasure or pain, but merely a
> simulacrum." Perhaps, but on what grounds will they defend this claim?
> Cog may be said to have quite crude, simplistic, one-dimensional pleasure
> and pain, cartoon pleasure and pain if you like,
> but then the same might also be said of
> the pleasure and pain of simpler organisms--clams or houseflies, for instance.
> Most, if not all, of the burden of proof is shifted by Cog, in my estimation.
> The reasons for saying that something does matter to Cog are not arbitrary;
> they are exactly parallel to the reasons we give for saying that things matter
> to us and to other creatures.

> Edwards:
> Things matter to us, principally, because it is beneficial to our welfare,
> such as eating, sleeping, friends, money, etc. There are things that matter
> to Cog as well, because Cog has been told to protect itself, such as not
> damaging itself.

Exactly. We need to define what its purpose in "life" is!

> DENNETT:
> Finally, J.R. Lucas has raised the claim that if a robot were really
> conscious, we would have to be prepared to believe it about its own internal
> states. I would like to close by pointing out that this is a rather likely
> reality in the case of Cog. Although equipped with an optimal suite of
> monitoring devices that will reveal the details of its inner workings to the
> observing team, Cog's own pronouncements could very well come to be a more
> trustworthy and informative source of information on what was really going on
> inside it. The information visible on the banks of monitors, or gathered by
> the gigabyte on hard disks, will be at the outset almost as hard to interpret,
> even by Cog's own designers, as the information obtainable by such "third-
> person" methods as MRI and CT scanning in the neurosciences. As the observers
> refine their models, and their understanding of their models, their authority
> as interpreters of the data may grow, but it may also suffer eclipse.
> Especially since Cog will be designed from the outset to redesign itself as
> much as possible, there is a high probability that the designers will simply
> lose the standard hegemony of the artificer ("I made it, so I know what it is
> supposed to do, and what it is doing now!"). Into this epistemological vacuum
> Cog may very well thrust itself. In fact, I would gladly defend the
> conditional prediction: if Cog develops to the point where it can conduct what
> appear to be robust and well-controlled conversations in something like a
> natural language, it will certainly be in a position to rival its own monitors
> (and the theorists who interpret them) as a source of knowledge about what it
> is doing and feeling, and why.

> Edwards:
> The obvious problem may arise from just believing Cog's own words. He
> may learn to lie, probably from us, just like a child. Unfortunately, we will
> have little choice but to believe him, as he gets more and more complex.

For a T3 robot to pass, surely it must not lie. If it could, the
Terminator could become a reality!!

Blakemore, Philip <pjb397@ecs.soton.ac.uk>


