Re: Sony Turing Test

From: Clark Graham (ggc198@ecs.soton.ac.uk)
Date: Tue Jun 05 2001 - 12:43:02 BST


The point of the Turing Test is to see whether a candidate is
indistinguishable from a human (if it is, then it is reasonable to
assume that the candidate has a mind). This is all (!) the Turing Test
can show.

If the robot is indeed wired up to a person, then it seems the only
way to find out would be to follow any wires or signals coming out of
the robot and see where they end up. If they lead to a human, the
whole (robot) exercise would be completely pointless - nothing about
the robot would be impressive, as it would all be controlled by a
human. Although I have heard nothing else about the robot, it seems
unlikely that an intelligent person could be persuaded to act as a
robot all day long, answering a barrage of questions and speaking
multiple languages.

Assuming the robot is an individual entity, the only way to decide
whether it would pass the Turing Test is to talk to it as you would a
"normal" human. One part of the Turing Test states that the candidate
and the "tester" must be in separate rooms, to avoid any visual bias -
it would be hard to say that a candidate was human if it was clearly a
robot - this would obviously add an amount of bias to the decision.

The other major point about the Turing Test is that a candidate must
be able to sustain its indistinguishability for a lifetime, i.e.
indefinitely. Without this stipulation, situations such as the Loebner
Prize come about, whereby a candidate that can fool judges for ten
minutes is said to have passed the Turing Test. If in the eleventh
minute it does something so ridiculous as to leave no doubt of its
mindlessness, it has clearly passed no such test and is nothing more
than a grand failure.

This raises a point which has led to many misinterpretations of
Turing's original paper (Computing Machinery and Intelligence) - the
issue of "fooling". A successful Turing Test candidate will not be
FOOLING anyone that it has a mind - it will actually HAVE one.
Something that fools us into believing a falsehood is of no interest
to us - one hope for a successful candidate is that it may teach us
about the way our own minds work. Clearly this isn't going to happen
if it doesn't really have a mind in the first place.

Back to the issue of deciding whether the New York robot would pass
the Turing Test - there are two major stumbling blocks for would-be
candidates - the Frame Problem and the Symbol Grounding Problem, which
are inter-related.

The Frame Problem is the situation that arises when a candidate is
asked questions about things outside its frame of knowledge. The
example always given is a program that models everything (to an
arbitrary level of detail) about a person using a telephone in a room.
The program can answer questions about how the person operates the
phone, what he says into it, etc etc. Up to this point, it seems that
the program is intelligent - it can infer answers from details about
the situation; it seems to understand. However, when the conversation
is over and the person has left the room, the program can be asked a
question like "What happens to the phone now?", and will respond with
either indecipherable nonsense or something like "The phone ceases to
exist". When a reply like this is given, it is clear that the program
actually understands NOTHING - all its answers about the phone
situation were merely fooling us into thinking that it did. Extra
knowledge can be programmed in so that this specific problem does not
arise again, but the Frame Problem will then simply occur "further
down the line" (cf Goedel's Theorem).
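
To make this concrete, here is a minimal sketch (Python; the toy
knowledge base is invented for illustration and describes no real
system) of how a program with a fixed frame of knowledge fails:

    # A toy model of the "telephone room": facts are stored explicitly,
    # so the program can only answer what its programmer anticipated.
    KNOWLEDGE = {
        "how does the person dial?": "By pressing the keypad digits in order.",
        "what does the person say?": "Hello, followed by the conversation.",
        "where is the phone?": "On the table, next to the person.",
    }

    def answer(question):
        # Inside the frame: a stored fact yields a sensible-looking
        # reply, and the program appears to understand the situation.
        if question in KNOWLEDGE:
            return KNOWLEDGE[question]
        # Outside the frame: nothing was anticipated, so the reply
        # betrays that no understanding was ever present.
        return "The phone ceases to exist."

    print(answer("where is the phone?"))             # seems intelligent
    print(answer("what happens to the phone now?"))  # frame problem exposed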

The Symbol Grounding Problem is best illustrated by the "Chinese-
Chinese dictionary-go-round" example. A person with no knowledge of
Chinese cannot learn the meaning of a Chinese word by looking it up in
a Chinese-Chinese dictionary. It will simply be defined in terms of
meaningless (to the reader) symbols; if one of these is looked up,
more meaningless symbols will define the new word. If the person keeps
looking up the defining symbols, they will just go round and round the
dictionary (or at least a subset of it), learning nothing of the
Chinese language in the process. To gain any meaning from the symbols,
a subset of them needs to be "grounded" in some way - a Chinese person
could point to one symbol and then to a (real-life) cat to indicate
that that particular symbol meant "cat". If the original reader
already knew what a cat was, they could assign some meaning to that
particular symbol. If enough Chinese symbols/words were grounded in
such a way, then symbols defined in terms of grounded symbols would be
able to have meaning assigned to them in the mind of the reader - the
reader could learn the language. Only when a Turing Test candidate has
a set of grounded symbols can it really understand anything, and this
will help to overcome the Frame Problem. Turing's original test just
placed the candidate in a "pen-pal" relationship with a tester (today
such a test could be conducted via e-mail), but it has been proposed
that such a system could not overcome the Symbol Grounding Problem -
it would need non-computational devices such as transducers and
cameras to connect its symbols to the world.
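
A small sketch (Python again; the four symbols and their definitions
are made up for this illustration) shows why pure look-up circles
forever until at least one symbol is grounded:

    # Four made-up symbols, each defined only in terms of the others,
    # like entries in a Chinese-Chinese dictionary.
    DICTIONARY = {
        "A": ["B", "C"],
        "B": ["C", "D"],
        "C": ["D", "A"],
        "D": ["A", "B"],
    }

    GROUNDED = {}  # symbol -> real-world meaning; empty to start with

    def meaning(word, seen=frozenset()):
        # A symbol has meaning only if a chain of definitions reaches
        # a grounded symbol; pure circularity yields no meaning.
        if word in GROUNDED:
            return GROUNDED[word]
        if word in seen:
            return None  # we have gone round the dictionary
        for defining in DICTIONARY[word]:
            found = meaning(defining, seen | {word})
            if found is not None:
                return found
        return None

    print(meaning("A"))    # None: every lookup just circles back

    GROUNDED["C"] = "cat"  # a Chinese speaker points at a real cat
    print(meaning("A"))    # "cat": a definition chain now reaches the world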

As the New York robot is a robot and not just a static computational
device, it may well have the capability to understand its environment
and the symbols it is operating upon.

I would think the only way to do a short-term test on it is to attack
the Frame and Symbol Grounding problems - ask it questions (simple
ones, to which every person could give a meaningful answer) that could
possibly fall outside its frame of knowledge, such as what happens to
a passer-by when they disappear round a corner. I have looked at
several websites in the past that list questions to ask a Turing Test
candidate, and these are primarily of the "Is there a god?" variety. A
Turing Test-passing system is not supposed to be an oracle, just
indistinguishable from humans (mainly mind-wise). Therefore, the only
way to test its indistinguishability is to ask it questions that would
confuse no-one, and whose answers would show that the candidate
understood something about what was going on.
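
For what it's worth, a sketch of the kind of probe battery I have in
mind (the questions here are my own invention, and a human tester must
still judge the replies - no script can do that part):

    # Simple everyday questions that any person answers trivially, but
    # which may fall outside a candidate's programmed frame of knowledge.
    PROBES = [
        "A passer-by walks round the corner. Where are they now?",
        "You put a coin in your pocket and stand up. Where is the coin?",
        "The call ends and everyone leaves. What happens to the phone?",
    ]

    def probe(candidate):
        # 'candidate' is any callable taking a question string and
        # returning an answer string.
        for question in PROBES:
            print(question)
            print("  ->", candidate(question))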

I hope this helps you somehow,

Graham Clark
___________________
ggc198@soton.ac.uk


