Re: Sony Turing test

From: Clark Graham (ggc198@ecs.soton.ac.uk)
Date: Sat Jun 16 2001 - 13:00:24 BST


---------- Forwarded message ----------
Date: Sat, 09 Jun 2001 22:27:19 -0400
From: Cliff Landesman <cland@netbox.com>
To: Graham <ggc198@ecs.soton.ac.uk>
Subject: Re: the Sony Turing test case (fwd)

Ok, with your guidance, it was an easy call. It's a human. After I
convinced myself of the truth with a short conversation, I started talking
to the guard. She smiled and suggested I visit the "Sony Wonder Lab". There
I met the Wizard of Oz who controls the robot. Actually, the actor changes
every hour because the job is so exhausting. He doesn't speak many
languages, but just says hello in a few and can speak a little Spanish.

Regarding my claim that as robots get better, the Sony Test gets easier, I
only claim this is true up to a point. You have to admit that it would be
very hard to distinguish the Eliza program from a human imitating Eliza. As
robots get more sophisticated, they will still fall short of human
intelligence, at least for a while. A good actor will have to know the
nuances of progress in AI, what a robot can and can't do. Consistency will
become a serious issue, as in any attempt to elaborate and defend a lie. I
agree that at some point, when robots get close to passing the Turing Test,
the Sony Test gets hard again.
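
To see why, here is a minimal Eliza-style responder sketched in Python
(the rules are invented for illustration; Weizenbaum's actual script was
larger, but no deeper). The entire strategy is shallow pattern matching
with canned reflections, which a patient human with a card of rules
could reproduce exactly:

    import re, random

    # Invented rules: match a pattern, echo fragments back in a template.
    RULES = [
        (r"\bI need (.+)", ["Why do you need {0}?",
                            "Would it really help you to get {0}?"]),
        (r"\bI am (.+)",   ["How long have you been {0}?",
                            "Why do you say you are {0}?"]),
        (r"\bmy (\w+)",    ["Tell me more about your {0}."]),
    ]

    def respond(utterance):
        for pattern, replies in RULES:
            match = re.search(pattern, utterance, re.IGNORECASE)
            if match:
                return random.choice(replies).format(*match.groups())
        return "Please go on."  # default when nothing matches

    print(respond("I am worried about the robot"))
    # -> e.g. "Why do you say you are worried about the robot?"

Nothing here depends on understanding, which is exactly why an actor
imitating it would be indistinguishable from the program itself.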

Thank you for your help. It is always a pleasure to discover the truth.

Cliff

At 09:47 AM 6/9/01 +0100, you wrote:
>
>Hi,
>
>I agree with you that it is entirely probable that the robot could be
>controlled by a human (I was going to change what I said about it in
>my original message, but forgot). However, I am still a bit unsure
>about whether it would be possible to tell if it was a human,
>especially while holding the view that it is possible for a robot to
>pass the Turing Test. The robot (whether it is real or a character
>played by an actor) behaves very similarly to a human. It is easy for
>one human to imitate another (an arbitrary person, not someone like a
>celebrity) - they just have to act normally. If it were an actor, and
>he was trained about the Frame and Grounding problems, you might be
>able to catch him out by continually testing the "robot's" frame of
>knowledge, but a good enough actor who knew everything about his
>character may still be able to get past these.
>
>Although in the future it will be easier for robots to pass the Turing
>Test (if it is actually possible in the first place), I think it will
>also become trivial for humans to pass the Sony Test. If a robot
>exactly implemented the human mind, then a human imitating it would
>only have to imitate a human. In fact, at the top of the Turing
>hierarchy (a series of additions to the original Turing Test [T2],
>where this robot would be T3), at the T5 level, the candidate also has
>to look like a human, both outside and in. A human imitating a T5
>passer would just have to stand there and talk or perform actions.
>
>I would be very interested to hear how you get on.
>
>thanks,
>
>Graham.
>
>On Tue, 5 Jun 2001, Cliff Landesman wrote:
>
> > Thank you for the reply. I will follow your suggestions and--if you
> > like--let you know the results.
> >
> > I just want to point out one thing. If there is a person behind the robot,
> > and he knows about the Frame and Grounding problems, then he will try to
> > imitate a robot in this respect. Improbable? Consider the fact that paying
> > an AI expert to train an actor costs less than the 3.5 million dollars bb
> > Wonderbot says Sony spent on bb's development. If you simply wanted to
> > entertain visitors to the Sony Wonder Lab, and were a profit-maximizing
> > corporation, which would you do?
> >
> > This is not really a Turing Test because the human, if there is one, is
> > not acting honestly. What makes the Sony case interesting is that the
> > robot appears much smarter than the Eliza program. This ambition in the
> > performance, be it the performance of a human or robot, gives me hope I
> > might be able to make a discovery. If the robot is controlled by a human,
> > the actor has chosen to imitate a robot complex enough that the
> > performance may be difficult to sustain with perfect consistency.
> > Low-grade robots are easy to imitate, and harder to distinguish from
> > actors playing robots. Oddly enough, at one end of the spectrum, the
> > *better* the robot performance, the *easier* it is to distinguish from an
> > actor playing a robot. As the frontiers of AI advance, it will be easier
> > for robots to pass the Turing Test and harder for humans to pass the
> > Sony Test.
> >
> > Cliff
> >
> > At 12:43 PM 6/5/01 +0100, you wrote:
> > >
> > >
> > >Hi,
> > >
> > >I received your message via Stevan Harnad, who asked students on his
> > >course to reply to it.
> > >
> > >The point of the Turing Test is to see whether a candidate is
> > >indistinguishable from a human (if it is, then it is reasonable to
> > >assume that the candidate has a mind). This is all (!) the Turing Test
> > >can show.
> > >
> > >If the robot is indeed wired up to a person, then it seems the only
> > >way to find out would be to follow any wires or signals coming out of
> > >the robot and see where they end up. If they lead to a human, the
> > >whole (robot) exercise would be completely pointless - nothing about
> > >the robot would be impressive, as it would all be controlled by a
> > >human. Although I have heard nothing else about the robot, it seems
> > >unlikely that an intelligent person could be persuaded to act as a
> > >robot all day long, answering a barrage of questions and speaking
> > >multiple languages.
> > >
> > >Assuming the robot is an individual entity, the only way to decide
> > >whether it would pass the Turing Test is to talk to it as you would a
> > >"normal" human. One part of the Turing Test states that the candidate
> > >and the "tester" must be in separate rooms, to avoid any visual bias -
> > >it would be hard to say that a candidate was human if it was clearly a
> > >robot - this would obviously add an amount of bias to the decision.
> > >
> > >The other major point about the Turing Test is that a candidate must
> > >be able to sustain a lifetime's indistinguishability, i.e. indefinitely.
> > >Without this stipulation, situations such as the Loebner prize come
> > >about, whereby a candidate that can fool judges for ten minutes is
> > >said to have passed the Turing Test. If in the eleventh minute it does
> > >something so ridiculous that no doubt is left as to its mindlessness,
> > >it has clearly passed no such test and is nothing more than a grand
> > >failure.
> > >
> > >This raises a point which has led to many misinterpretations of
> > >Turing's original paper (Computing Machinery and Intelligence) - the
> > >issue of "fooling". A successful Turing Test candidate will not be
> > >FOOLING anyone that it has a mind - it will actually HAVE one.
> > >Something that fools us into believing a falsehood is of no interest
> > >to us - one hope for a successful candidate is that it may teach us
> > >about the way our own minds work. Clearly this isn't going to happen
> > >if it doesn't really have a mind in the first place.
> > >
> > >Back to the issue of deciding whether the New York robot would pass
> > >the Turing Test - there are two major stumbling blocks for would-be
> > >candidates - the Frame Problem and the Symbol Grounding Problem, which
> > >are inter-related.
> > >
> > >The Frame Problem is the situation that arises when a candidate is
> > >asked questions about things outside its frame of knowledge. The
> > >example always given is a program that models everything (to an
> > >arbitrary level of detail) about a person using a telephone in a room.
> > >The program can answer questions about how the person operates the
> > >phone, what he says into it, and so on. Up to this point, it seems that
> > >the program is intelligent - it can infer answers from details about
> > >the situation; it seems to understand. However, when the conversation
> > >is over and the person has left the room, the program can be asked a
> > >question like "What happens to the phone now?", and will respond with
> > >either indecipherable nonsense or something like "The phone ceases to
> > >exist". When a reply like this is given, it is clear that the program
> > >actually understands NOTHING - all answers about the phone situation
> > >were merely fooling us that it did. Extra knowledge can be programmed
> > >into the robot so that this specific problem will not arise again, but
> > >the Frame Problem will then simply occur "further down the line" (cf.
> > >Goedel's Theorem).
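> > >
> > >To make that concrete, here is a minimal sketch in Python (my own
> > >illustration, not any actual system): a question-answerer whose
> > >"knowledge" is a fixed table of facts about the phone scene. Inside
> > >its frame the canned facts pass for understanding; one step outside
> > >and the illusion collapses:
> > >
> > >    # Toy "frame": hand-coded facts about the phone-call scene.
> > >    FRAME = {
> > >        "how is the phone operated":
> > >            "The person dials, then speaks into the receiver.",
> > >        "what does the person say":
> > >            "Hello, then the business of the call.",
> > >    }
> > >
> > >    def answer(question):
> > >        key = question.lower().rstrip("?")
> > >        if key in FRAME:
> > >            return FRAME[key]  # within the frame: looks intelligent
> > >        # Outside the frame there is nothing to infer from; nothing
> > >        # says the phone persists once the modelled scene ends.
> > >        return "The phone ceases to exist."
> > >
> > >    print(answer("How is the phone operated?"))      # sensible
> > >    print(answer("What happens to the phone now?"))  # nonsense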
> > >
> > >The Symbol Grounding Problem is best illustrated by the "Chinese-
> > >Chinese dictionary-go-round" example. A person with no knowledge of
> > >Chinese cannot learn the meaning of a Chinese word by looking it up in
> > >a Chinese-Chinese dictionary. It will simply be defined in terms of
> > >meaningless (to the reader) symbols; if one of these is looked up,
> > >more meaningless symbols will define the new word. If the person keeps
> > >looking up the defining symbols, they will just go round and round the
> > >dictionary (or at least a subset of it), learning nothing of the
> > >Chinese language in the process. To gain any meaning from the symbols,
> > >a subset of them needs to be "grounded" in some way - a Chinese person
> > >could point to one symbol and then to a (real-life) cat to indicate
> > >that that particular symbol meant "cat". If the original reader
> > >already knew what a cat was, they could assign some meaning to that
> > >particular symbol. If enough Chinese symbols / words were grounded in
> > >such a way, then symbols defined in terms of grounded symbols would be
> > >able to have meaning assigned to them in the mind of the reader - the
> > >reader could learn the language. Only when a Turing Test candidate has
> > >a set of grounded symbols can it really understand anything, and this
> > >will help to overcome the Frame Problem. Turing's original test just
> > >showed a candidate in a "pen-pal" relationship with a tester (today
> > >such a test could be conducted via e-mails), but it has been proposed
> > >that such a system could not overcome the Symbol Grounding Problem -
> > >it would have to have non-computational devices such as transducers
> > >and cameras.
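> > >
> > >Again, a toy sketch in Python (the dictionary entries are invented)
> > >shows the go-round: every definition bottoms out in more ungrounded
> > >symbols, so lookup just cycles without ever yielding meaning:
> > >
> > >    # Toy Chinese-Chinese dictionary: each symbol is defined only
> > >    # by other symbols in the same dictionary.
> > >    DICTIONARY = {
> > >        "mao":    ["xiao", "dongwu"],  # intended: "cat" = small animal
> > >        "xiao":   ["dongwu"],
> > >        "dongwu": ["mao", "gou"],
> > >        "gou":    ["dongwu", "xiao"],
> > >    }
> > >
> > >    def chase(symbol, steps=6):
> > >        """Follow the first defining symbol at each step."""
> > >        trail = [symbol]
> > >        for _ in range(steps):
> > >            symbol = DICTIONARY[symbol][0]
> > >            trail.append(symbol)
> > >        return trail
> > >
> > >    print(chase("mao"))
> > >    # -> ['mao', 'xiao', 'dongwu', 'mao', 'xiao', 'dongwu', 'mao']
> > >    # Round and round the dictionary; no symbol ever gains meaning.
> > >    # Grounding breaks the circle: tie some symbols to things outside
> > >    # the dictionary (a gesture, a camera image of a real cat), and
> > >    # definitions built from grounded symbols can then carry meaning.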
> > >
> > >As the New York robot is a robot and not just a static computational
> > >device, it may well have the capability to understand its environment
> > >and the symbols it is operating upon.
> > >
> > >I would think the only way to do a short-term test upon it is to
> > >attack the Frame and Symbol Grounding problems - ask it questions
> > >(simple ones, to which every person could give a meaningful answer)
> > >that could possibly fall outside its frame of knowledge, such as what
> > >happens to a passer-by when they disappear round a corner. I have
> > >looked at several websites in the past that list some questions to ask
> > >a Turing Test candidate, and these are primarily of the "Is there a
> > >god?" variety. A Turing Test-passing system is not supposed to be an
> > >oracle, just indistinguishable from humans (mainly mind-wise).
> > >Therefore, the only way to test its indistinguishability is to ask it
> > >questions that would confuse no-one, and to which every person could
> > >give an answer showing they understood something about what was
> > >going on.
> > >
> > >I hope this helps you somehow,
> > >
> > >Graham Clark
> > >___________________
> > >ggc198@soton.ac.uk
> > >
> > >
> > > >
> > > > I was wondering if you could help me with an empirical question. There
> > > > is a robot on display in the Sony building in New York City. I can't
> > > > tell if the robot is reasonably good at imitating a human or if a
> > > > human is reasonably good at imitating a robot. Perhaps the robot is
> > > > wired to a person who provides the robot with answers. What questions
> > > > could I ask the robot to help me decide?
> > > >
> > > > Here's a little background.
> > > >
> > > > When I asked the robot ("bb Wonderbot") what was the product of two
> > > > longish numbers, the reply was "I don't know. I'm a robot, not a
> > > > calculator".
> > > >
> > > > The robot seems to speak multiple languages (so do many humans).
> > > >
> > > > The robot uses a movable video camera (a pair of camera "eyes"?) and
> > > > can discuss transient objects in the immediate environment. It
> > > > recognized me when I left for a few minutes and then returned.
> > > >
> > > > It would be hard to distinguish a human imitating Eliza from Eliza
> > > > itself.
> > > >
> > > > When I spoke grammatical nonsense to the robot, it used my words to
> > > > build a story that made semantic sense. It did this a little too well
> > > > for a human, but I wasn't sure an alert person with a good memory
> > > > couldn't have done the same.
> > > >
> > > > Cliff
> > > >
> > > >
> > >
> > >Graham.
> > >_____________________________
> > >
> > >http://www.ecs.soton.ac.uk/~ggc198
> >
> >
> >


