Re: Turing: Computing Machinery and Intelligence

From: HARNAD Stevan (harnad@coglit.ecs.soton.ac.uk)
Date: Tue Mar 20 2001 - 22:49:26 GMT


On Wed, 28 Feb 2001, Bon Mo wrote:

> Turing, A. M. (1950) Computing Machinery and Intelligence.
> Mind 59:433-460.
> http://cogprints.soton.ac.uk/documents/disk0/00/00/04/99/index.html
 
> > TURING:
> > If the meaning of the words "machine" and "think" are
> > to be found by examining how they are commonly used it
> > is difficult to escape the conclusion that the meaning
> > and the answer to the question, "Can machines think?"
> > is to be sought in a statistical survey such as a
> > Gallup poll.

Turing is saying this ironically, but in the end, the way he formulates
his Turing Test [TT] is as a kind of Opinion Poll! This is not what it is,
and not how it should be construed. The TT is not passed by fooling
N people X% of the time. It is passed by really designing a system that
has the full ability to perform as a pen-pal for a lifetime,
indistinguishably (to anyone) from a real human pen-pal.

http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad92.turing.html

> > TURING:
> > In order that tones of voice may not help the
> > interrogator the answers should be written, or better
> > still, typewritten. The ideal arrangement is to have
> > a teleprinter communicating between the two rooms.

Turing obviously meant to exclude irrelevant biassing information not
only from the sound of the candidate, but also its appearance. But are
all aspects of appearance merely irrelevant, biassing information?
Doesn't being able to DO anything a real person can do include a lot of
things that can only be detected visually and auditorily (e.g., which
things the person calls "apples")?

> Mo:
> In order to keep the contest as even as possible, any
> physical aspects of the participants are hidden from
> the interrogator. The idea in this example is to set up
> text interaction only, possibly as modern-day e-mail
> interaction. It is from the replies to the questions that
> the interrogator must make their decision, not from the
> way a contestant looks or talks.

Let us say it is from what the candidate can DO rather than from what
the candidate looks/sounds like. But aren't there many things we can do
beyond emailing? And is there any way to test and detect them other than
by looking/listening?

> > TURING:
> > we should feel there was little point in trying to make
> > a "thinking machine" more human by dressing it up in
> > such artificial flesh.

Skin (and skin colour) are obviously irrelevant. But is having a body
that can interact with the world irrelevant too?

> Mo:
> [if] it did fool you for a lifetime that it was human

The question is not whether it is human but whether it has a mind. If
it could interact indistinguishably from a human for a lifetime, would
you be FOOLED if you assumed it had a mind?

> Mo:
> some mentally retarded humans do not seem
> capable of looking after themselves, let alone typing out a
> coherent message. They might not even be believed to be
> human by the interrogator. So if these two participants were
> compared, who would fall short of being classed as human?

The Turing Test is of candidates we have designed, not of retarded
humans we have not designed. Nor would it be passing the TT to design a
candidate that we could not tell apart from a retarded human that we
were not even sure had a mind! The TT is not a trick, or a way of
fooling. It is a test of full human-scale performance capacity. (There
is no point in trying to model planes by modeling damaged planes that
can't fly.)

> Mo:
> Turing must have foreseen that his game was limited
> to symbolic Q&A-style reasoning. This was argued by Searle,
> with his Chinese-Chinese dictionary example: with only
> symbols, a question in Chinese (if you did not already know
> Chinese) could not be understood unless you searched for the
> definition, but that too would be in Chinese. So you could
> regress continually through definitions and never find an
> understandable meaning (from your viewpoint, not from someone
> who understands Chinese).

Here you have mixed up Searle's Chinese Room Argument (which shows that
a person who doesn't understand Chinese could implement a Chinese
TT-pen-pal programme without being able to understand Chinese) and
the Symbol Grounding Problem (one illustration of which is the
impossibility of finding out what a Chinese word means from looking it
up in a Chinese/Chinese dictionary if you do not know the meaning of any
words at all in Chinese).
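
To make the dictionary-go-round concrete, here is a minimal sketch
in Python (the "words" are invented placeholders, not real Chinese):
every lookup yields only more ungrounded symbols.

    # A toy Chinese/Chinese dictionary: every word is defined only in
    # terms of other words in the same dictionary. (The entries are
    # invented placeholders, not real Chinese.)
    dictionary = {"zhi": ["dao", "xin"],
                  "dao": ["xin", "zhi"],
                  "xin": ["zhi", "dao"]}

    def lookup(word, steps=6):
        # Follow definitions; every step yields only more symbols.
        trail = [word]
        for _ in range(steps):
            word = dictionary[word][0]   # chase the first defining word
            trail.append(word)
        return trail                     # no exit from the symbol system

    print(lookup("zhi"))   # just cycles: zhi, dao, xin, zhi, dao, ...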

> Mo:
> Searle deemed that symbols needed
> to be grounded for them to carry through any meaning.

No, Searle did not say a word about symbol grounding. He thought only
brain processes could generate meaning.

> Mo:
> Another limitation of the Q&A format is that anything
> included with the question that requires robotic functionality,
> such as describing a picture or smelling an object, would
> not be possible unless sensorimotor capabilities are provided.

Correct. But some think the power and scope of language is so great that
it indirectly tests all of our capabilities: The task need merely be
described in words: "What would you call it if you saw a big coloured
arc in the air after it had rained? Describe the colours...."

> Mo:
> the way one individual tackles a problem can differ
> from another's, even if they were educated in the same way. It
> would not be possible to program each different variation
> that can be encountered.

The candidate need not be EVERY person, just one individual; after all,
that's all any of us is. It's not the differences between us that need
to be captured, but what we all have in common, as thinking beings.

> Mo:
> The machine must be required to continually learn
> new facts and rules, and to change the relationships between
> the data it stores to accommodate the changes. After all,
> that is what humans do continuously.

Correct.
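
For concreteness, a minimal sketch in Python of that kind of
continual revision (the facts, the rule format, and all the names
are invented for illustration, not a model of any actual learner):

    # Hypothetical sketch: a store of facts and rules that keeps
    # being revised as new data arrives. All names are invented.
    facts = set()
    rules = {"raining": "streets wet"}   # condition -> conclusion

    def learn_fact(fact):
        facts.add(fact)
        # Retract any rule whose conclusion the new data contradicts.
        for cond, concl in list(rules.items()):
            if cond in facts and ("not " + concl) in facts:
                del rules[cond]

    learn_fact("raining")
    learn_fact("not streets wet")   # observation refutes the rule
    print(rules)                    # {} -- the relationship was revised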

> > TURING:
> > We also wish to allow the possibility that an engineer
> > or team of engineers may construct a machine which works,
> > but whose manner of operation cannot be satisfactorily
> > described by its constructors because they have applied a
> > method which is largely experimental. Finally, we wish to
> > exclude from the machines men born in the usual manner.

Turing is simply saying here that the only point of the TT is if you
know HOW the candidate works. We learn nothing if we simply use a real
human, or a clone, or a robot that grew from a tree, or something we
designed while sleep-walking as our candidate. This is
reverse-engineering, after all, not simply people-collecting.

> Mo:
> If the engineers do not know the
> mechanism even though they built it, then they have gained
> nothing, as they do not know how it works.
> I also take it that the exclusion means that the machines
> are to be man-made (non-natural) and that we know their
> mechanisms.

Correct.

> > TURING:
> > It may also be said that this identification of machines
> > with digital computers, like our criterion for "thinking"
> > will only be unsatisfactory if (contrary to my belief),
> > it turns out that digital computers are unable to give a
> > good showing in the game.
>
> Mo:
> Turing now uses a digital computer as the machine to
> satisfy the conditions of the game.

Why? Why is it not enough that it be a machine we built, and that we
understand how it works? Why does it have to be a digital computer?

> Mo:
> There must also be a way of measuring the quality of the
> answers given by the computer and the human. One way is
> for the interrogator to agree on an answer, with a certain
> degree of error thrown in. This is obviously subject to
> an individual opinion, so it is hard to write down explicitly
> the criteria to 'pass' a question. The interrogator can
> then add up all the resultant 'passes' and compare which
> of A or B has the highest number, and choose that as the
> candidate that gives 'the closest best answers considering
> the questions'. Of course that candidate could be either
> the computer or the human.

All of this is arbitrary and unnecessary. All that is required is that
the performance capacity of the candidate be indistinguishable from that
of a real person (for a lifetime). No right/wrong or scoring needed...

> > TURING:
> > I believe that in about fifty years' time it will be
> > possible to programme computers, with a storage capacity
> > of about 10^9, to make them play the imitation game so well
> > that an average interrogator will not have more than 70
> > per cent chance of making the right identification after
> > five minutes of questioning.
>
> Mo:
> Fifty years on, the power of the computer has grown
> exponentially: with terabytes of storage capacity, and
> 1000MHz+ clock speeds capable of carrying out a billion
> instructions a second, the average computer is still short
> of matching human capabilities in generating "good" answers.
> The criteria for the answers have also changed. The Loebner
> Prize is an award given to a computer system capable of
> fooling a panel of 3 judges for 45 minutes. So far no system
> has come close to this; what is required, though, is Turing
> indistinguishability for a lifetime. A system is no use if
> it cannot last that length of time.

Almost all correct. But what you should say is that there may be many
ways to fool some people for a short time, but that's not what we want.
We want the real, full-scale ability.

> Mo:
> At present there are individual
> algorithms that can be as intelligent as, if not more so
> than, humans at tasks such as chess playing and arithmetic.

Be careful. They may be good at performing the task, but whether or not
they are intelligent (thinking, understanding, etc.) concerns whether or
not they have a mind, and that is what algorithms (computation) are on
trial for here.

> Mo:
> The argument from consciousness is that machines
> cannot have feelings. This of course can be proven wrong
> by giving machines sensorimotor capabilities: taking
> in analog world data and changing it into grounded
> symbolic data.

How does sensorimotor capacity prove that a system is feeling?
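
For what the proposal amounts to mechanically, here is a minimal
sketch in Python (the wavelength bands are approximate; the function
name is invented): an analog magnitude is transduced into a category
symbol. Note that nothing in such a sketch settles the question
about feeling.

    # Hypothetical sketch: transducing an analog sensor value into a
    # symbol. Wavelength bands are approximate; names are invented.
    def categorize(wavelength_nm):
        if 620 <= wavelength_nm <= 750:
            return "red"
        if 495 <= wavelength_nm < 570:
            return "green"
        return "unclassified"

    print(categorize(680.0))   # -> red: a symbol tied to analog input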

> > TURING:
> > A variant of Lady Lovelace's objection states that a
> > machine can "never do anything really new".

Granny Objections #1 and #2:

http://www.cogsci.soton.ac.uk/~harnad/CM302/Granny/sld003.htm

> > TURING:
> > The argument from informality of behaviour. It is
> > not possible to produce a set of rules purporting
> > to describe what a man should do in every
> > conceivable set of circumstances.
>
> Mo:
> This is recognised as a knowledge representation
> problem. You can pre-program all the facts and rules
> into a system, but as soon as the algorithm is
> asked to perform an instruction that it has no rules
> for, the system may crash alarmingly. The idea
> is that it is not possible to program in all the facts
> and rules; eventually there will always be something
> the programmer did not think of. The system needs
> some way of gathering explicit rules that it deems
> useful, and of implementing them in its algorithm.

I'm not sure Turing meant the Frame Problem here. He probably meant
individual differences and individual uniqueness.
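
Still, the failure mode described above can be made concrete with a
minimal sketch in Python (all the rules and inputs are invented for
illustration): a fixed rule table meets a case it has no rule for,
and needs a fallback that acquires a rule rather than crashing.

    # Hypothetical sketch: a brittle rule table plus a learning fallback.
    rules = {"greeting": "reply politely",
             "question": "try to answer"}

    def respond(situation, teacher=None):
        if situation in rules:
            return rules[situation]
        if teacher is not None:
            rules[situation] = teacher(situation)   # acquire a new rule
            return rules[situation]
        return "no rule"    # the case the programmer did not think of

    print(respond("insult"))                              # -> no rule
    print(respond("insult", teacher=lambda s: "stay calm"))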

> > TURING:
> > We normally associate punishments and rewards with the
> > teaching process. Some simple child machines can be
> > constructed or programmed on this sort of principle.
>
> Mo:
> This is part of the credit/blame assignment problem: if the
> correct result is given, then credit the inputs leading to
> the result, and if an incorrect result occurs, then blame
> the inputs. The problem is how to assign the credit and
> blame. This process requires a teacher that tells the child
> machine when it gives a correct/incorrect response.

Correct.
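
A minimal sketch of such a credit/blame update, in Python (a
perceptron-style rule, offered as one standard illustration, not as
anything specified by Turing): the teacher's signal says right or
wrong, and only the inputs that contributed to a wrong answer have
their weights adjusted.

    # Hypothetical sketch: reward/punishment with a teacher signal.
    # Perceptron-style: blame (adjust) weights on active inputs when
    # the response is wrong; leave them alone when it is right.
    def train_step(weights, inputs, target, lr=0.1):
        output = 1 if sum(w * x for w, x in zip(weights, inputs)) > 0 else 0
        error = target - output          # teacher: +1 too low, -1 too high
        return [w + lr * error * x for w, x in zip(weights, inputs)]

    w = [0.0, 0.0]
    w = train_step(w, [1, 0], target=1)  # wrong answer: weights blamed
    w = train_step(w, [1, 0], target=1)  # now correct: weights unchanged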

Stevan Harnad


