From: Godfrey Steve (email@example.com)
Date: Sun Mar 04 2001 - 23:44:27 GMT
Turing: Computing Machinery and Intelligence
The paper 'Computing Machinery and Intelligence' by A. M. Turing,
published in 1950, discusses the question 'Can machines think?'.
Turing proposes a test by which a machine could be judged
to be thinking or not. This test proved so
influential that there is now an annual competition, started in
1991 by Dr. Hugh Loebner, in which a $100,000 prize is offered
to the author of the first computer program to pass an
unrestricted Turing test.
>I propose to consider the question, "Can machines think?"
This question, as Turing recognised, is ambiguous. The words
'machine' and 'think' are both poorly defined and, as I have
discovered after much debate, very hard to define. As humans we do
not yet know what it is to think, how thinking works, or what
else apart from ourselves thinks, so we cannot clearly define
what thinking is. The word 'machine' likewise has no clear boundary
between what is a machine and what is not. If man were able to build
an exact copy of a human, would it be a machine? If toasters grew
on trees, would they no longer be machines? And so on. Turing
therefore suggests a new form of the problem.
>The new form of the problem can be described in terms of a game
>which we call the 'imitation game'. It is played with three people,
>a man (A), a woman (B), and an interrogator (C) who may be of
>either sex. The interrogator stays in a room apart from the other
>two. The object of the game for the interrogator is to determine
>which of the other two is the man and which is the woman. He
>knows them by labels X and Y, and at the end of the game he says
>either "X is A and Y is B" or "X is B and Y is A." The
>interrogator is allowed to put questions to A and B.
The idea of this game is for the interrogator to use the differences
between men and women to work out which is which. If the two
were the same sex, the interrogator would have no differences
on which to base his decision, and would therefore not be able to
tell the two apart.
>We now ask the question, "What will happen when a machine
>takes the part of A in this game?" Will the interrogator decide
>wrongly as often when the game is played like this as he does
>when the game is played between a man and a woman?
I think that at this point the object of the game should change to
whether or not the interrogator can tell the difference between the
machine and a human. If the interrogator asks questions
specifically designed to tell the two apart, but fails
to do so, then it seems unreasonable to define one as able to
think but not the other. A machine can be
classified as thinking by passing this test, fooling the interrogator
into believing it is human.
What would be the outcome if one interrogator is fooled and cannot
tell the difference, but another can? Does the machine pass the test
or not?
>The new problem has the advantage of drawing a fairly sharp line
>between the physical and the intellectual capacities of a man.
It is important that the only attribute of a human being tested
here is thinking. It would be wrong to let physical differences
between the machine and the human determine whether the machine is
judged able to think. For this reason it is important that the
interrogator is kept in a separate room from the two competitors,
and that no direct contact is possible. The ideal method of
interaction between the parties would be a chat-room style text link.
>The game may perhaps be criticised on the ground that the odds
>are weighted too heavily against the machine. If the man were to
>try and pretend to be the machine he would clearly make a very
>poor showing.
I think that it is unfair to say that for something to be able to think,
it must be able to mimic the behaviour of a human. Surely dolphins
are intelligent, but they would probably be unable to pass the
Turing test, as they are not human.
>May not machines carry out something which ought to be
>described as thinking but which is very different from what a man
>does?
This version of the game relies on the assumptions that humans think,
which I am sure is true in most cases, and that for a machine to
think, it must think at the same level as a human and be
able to mimic human mental behaviour. A machine may still be
thinking, even if at a lower level than human thought. Or perhaps it
is not even a lower level: humans may simply be able to perform the
task of thinking at a faster rate than computers, or may have
something that allows our brains to link chains of thought
together and enables us to learn. If a machine lacked
this it would not be unable to make decisions, as such chaining
would be a further human capability alongside thinking.
We like to think of ourselves as the highest form of thinkers, which
means that there could be forms of thinking that are lower than our
own. If thinking is simply about making a decision about
something based upon previous experience or mistakes, then
machines can already do this. By passing the Turing test, a
machine has shown that it can think, as it is indistinguishable from
something that we know can think. But a machine that fails
the Turing test could still be thinking at a lower level. Because
of this I think that the test will identify machines that think at a
high level, but will miss machines that think at a lower one. Still,
as we do not yet know much about thinking, it is a good starting point.
>It might be urged that when playing the "imitation game" the best
>strategy for the machine may possibly be something other than
>imitation of the behaviour of a man
Should the machine simply be trying to imitate the behaviour of a
human? If we approach the problem from this angle, might we
simply be engineering toward a point at which we can
fool ourselves, which may fall short of the target of a true
thinking machine? I think a better approach to the problem would
be to design a machine that thinks and passes the Turing test as a
consequence, rather than developing a machine purely to pass the test.
>It is natural that we should wish to permit every kind of
>engineering technique to be used in our machines.
>We also wish to allow the possibility that an engineer or team of
>engineers may construct a machine which works, but whose
>manner of operation cannot be satisfactorily described by its
>constructors because they have applied a method which is largely
>experimental.
>We wish to exclude from the machines men born in the usual
>manner.
Turing here tries to define what he means by a machine in the
question. The first two conditions are straightforward, but the
third confused me slightly, so I will try to explain
what I think Turing means. If a human were grown in a
laboratory environment, by taking a DNA sample and simply
growing it, the resulting human would be man-made, but it
would not be eligible for the test, as humans did not design it.
>we only permit digital computers to take part in our game.
What would happen if a new form of computer were designed
in the future that was not digital? Maybe this restriction is a bit too
restrictive.
>we are not asking whether all digital computers would do well in
>the game nor whether the computers at present available would do
>well, but whether there are imaginable computers which would do
>well.
This question can never be proved false, as technology is
constantly improving and better computers are constantly being
designed and built. There will always be better machines that can
be imagined in the future.
Turing proceeds to describe the principles by which a digital
computer can be constructed, and shows how a digital computer
can mimic the actions of a human computer very closely.
>This special property of digital computers, that they can mimic
>any discrete-state machine, is described by saying that they are
>universal machines.
This means that a digital computer can perform any computing
process, given a suitable program in each case; in this sense all
digital computers are equivalent. The fact that digital computers can
mimic any discrete-state machine means that, if it were the case
that the human brain is in fact a discrete-state machine, then a
digital computer would be able to model it, and would therefore be
able to think. We do not yet know whether the brain is a
discrete-state machine, but this could be settled in the future and
would provide the answer to our question.
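Turing illustrates the point with a small three-state machine. As a sketch of why a digital computer can mimic any discrete-state machine, here is a toy simulation (the lamp output and the wheel-like states follow the spirit of Turing's example, but the exact table below is my own illustration):

```python
# A discrete-state machine is fully described by its transition table,
# so a digital computer can mimic it simply by storing that table.
# The exact table here is illustrative, not Turing's own.
transitions = {          # (current state, input) -> next state
    ("q1", "i0"): "q2", ("q1", "i1"): "q1",
    ("q2", "i0"): "q3", ("q2", "i1"): "q2",
    ("q3", "i0"): "q1", ("q3", "i1"): "q3",
}
lamp = {"q1": "off", "q2": "off", "q3": "on"}  # output for each state

def run(state, inputs):
    # Step the machine through one input symbol at a time.
    for symbol in inputs:
        state = transitions[(state, symbol)]
    return state

final = run("q1", ["i0", "i0", "i1"])
print(final, lamp[final])  # q3 on
```

Whatever table is plugged in, the same `run` loop mimics it, which is the sense in which the digital computer is universal over discrete-state machines.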
Turing now presents some of the objections to the question. He
dismisses the theological objection by showing how theological
arguments have contradicted our present-day knowledge.
>There are a number of results of mathematical logic which can be
>used to show that there are limitations to the powers of discrete-
>state machines.
Gödel's theorem (1931) shows that 'in any sufficiently powerful
logical system statements can be formulated which can neither be
proved nor disproved within the system, unless possibly the system
itself is inconsistent'. This is a problem, as the digital computer in
the test will be using a logical system to imitate the behaviour of a
human. If there are inconsistencies in the system, then the system
may not produce the correct answer, if any answer at all, immediately
revealing to the interrogator that it is a machine.
>TURING (reflecting on Professor Jefferson's Lister Oration, 1949):
>the only way by which one could be sure that a machine thinks is to
>be the machine and to feel oneself thinking.
>he would be quite willing to accept the imitation game as a test
I think that it would be possible to discover whether a machine
was thinking or not without the need to become the machine. A
machine could be presented with a non-specific problem of any
difficulty. If the machine were then to apply the best method it
knows to solve the problem, could it be regarded as thinking? If it
makes a mistake, but adapts and gets it right when attempting a
problem of the same type, could that be regarded as thinking? In
my opinion, it depends upon the definition of thinking, as we
cannot say something is thinking if we do not know exactly what
thinking is. If something appears to be thinking, then given our
present lack of knowledge about thinking, is it not reasonable to
say it is thinking?
>TURING: presenting a possible sceptic's question
>I grant you that you can make machines do all the things you have
>mentioned but you will never be able to make one to do X
Turing talks about a machine's inability to experience some of the
things we take for granted, such as enjoying strawberries and
cream, and how this leads to failures in other areas, such as social
skills. The machine would not be able to discuss the taste of
strawberries and cream, as it would never have tasted them,
and so on. Maybe in the future a sensor will be
developed that detects taste; if it were fitted to a machine, then
maybe the machine could have the experience. I think this is a
problem with current technology. Once a machine had been
fitted with such a sensor, the question becomes whether, and how,
a machine can like something. I think that when we are born we
have certain likes and dislikes built in, such as certain tastes and
noises, and as we learn and develop our tastes change. Could a
machine not be programmed with a very simple 'like' profile when
created, then develop its profile further based on other experiences
or knowledge learnt?
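The 'like' profile idea above can be sketched as a toy program (entirely my own hypothetical illustration, not anything from Turing's paper): the machine starts with a few innate preferences and nudges them with each new experience.

```python
# A toy 'like' profile: innate preferences plus a simple update rule
# that moves each preference toward the outcome of new experiences.
class LikeProfile:
    def __init__(self, innate):
        self.scores = dict(innate)  # stimulus -> preference in [-1, 1]

    def experience(self, stimulus, outcome, rate=0.2):
        # Nudge the stored preference toward how the experience felt.
        old = self.scores.get(stimulus, 0.0)
        self.scores[stimulus] = old + rate * (outcome - old)

    def likes(self, stimulus):
        return self.scores.get(stimulus, 0.0) > 0

profile = LikeProfile({"sweet": 0.5, "loud noise": -0.5})  # innate profile
profile.experience("strawberries", +1.0)  # a pleasant first taste
profile.experience("strawberries", +1.0)  # reinforced by repetition
print(profile.likes("strawberries"))      # True
print(profile.likes("loud noise"))        # False
```

The point of the sketch is only that a fixed innate core plus a learning rule is enough for the profile to develop, which is all the paragraph above asks for.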
>The claim that a machine cannot be the subject of its own thought
>can of course only be answered if it can be shown that the
>machine has some thought with some subject matter
If we are to ask the question 'can a machine be the subject of its
own thoughts?', do we not first have to accept that machines can
think? This would then answer the original question.
>with a variation of Lady Lovelace's objection
>a machine can "never do anything really new."
I do not agree with this objection. It depends upon the definition of
'new'. I think that we as humans build on what we already know
and have discovered, to learn new things and make new
discoveries. This I feel is a combination of induction and informed
decisions: e.g. take what we already know and make a prediction
based upon previous experience and knowledge, to discover
something new.
Machines could be programmed with certain problem-solving
techniques and, when presented with a problem, may return a totally
new answer. This could be because the problem has never been
attempted before, or because of a unique way the machine has
applied the rules. Although the methods being applied are not new,
the result is new, and has been generated by the computer.
>The nervous system is certainly not a discrete-state machine. It
>may be argued that, this being so, one cannot expect to be able to
>mimic the behaviour of the nervous system with a discrete-state
>system.
This could be a problem if we wish to reverse engineer the brain
into a digital computer, as a digital computer is a discrete-state
machine. But I still think that it would be possible. Ideally the
discrete steps would be made infinitely small,
matching the nervous system, but this is not possible, as the result
would then not be a discrete machine. Instead the steps may be made
small enough that the error incurred becomes negligible.
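The 'small enough steps' idea can be illustrated numerically (a sketch of the general principle only, not a claim about the nervous system): approximating the continuous growth process dx/dt = x in discrete steps, the error against the exact answer e shrinks as the steps get finer.

```python
# Approximate the continuous process dx/dt = x over one unit of time
# with discrete steps; the exact answer is Euler's number e.
import math

def discrete_approximation(steps):
    x = 1.0
    dt = 1.0 / steps
    for _ in range(steps):
        x += x * dt   # one small discrete update
    return x

for steps in (10, 1000, 100000):
    error = abs(discrete_approximation(steps) - math.e)
    print(steps, error)   # the error shrinks as the steps get finer
```

This is the sense in which a discrete machine can come as close as desired to a continuous one without ever being continuous.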
>It is not possible to produce a set of rules purporting to describe
>what a man should do in every conceivable set of circumstances.
I do not think that it is necessary to have a complete set of rules to
start with. A newborn baby does not know what to do when it is
confronted with a red traffic light, it learns this through being
taught, and its experiences interacting with the real world.
Some rules would have to be hard coded into the machine to start
with, but these need only be the instincts of a newborn baby,
e.g. disliking pain or punishment, so that it would be able to learn
not to do some things, and a desire to learn. From here a
machine could learn all the other rules. In life, not every human
behaves in the same way or knows the same things.
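A toy sketch of this (entirely my own, hypothetical): a machine whose only hard-coded rule is to avoid repeating punished actions can acquire every other rule through experience.

```python
# The only innate "instinct" is: do not repeat punished actions.
# Every concrete rule (e.g. about traffic lights) is learnt, not coded.
class LearningMachine:
    def __init__(self):
        self.forbidden = set()   # learnt rules, initially empty

    def act(self, action):
        # The machine will attempt anything it has not learnt to avoid.
        return action not in self.forbidden

    def punish(self, action):
        # The one hard-coded instinct: remember punished actions.
        self.forbidden.add(action)

child = LearningMachine()
print(child.act("cross on red"))   # True: it does not know better yet
child.punish("cross on red")       # taught through experience
print(child.act("cross on red"))   # False: the rule has been learnt
```

Like the newborn in the paragraph above, the machine starts with no knowledge of traffic lights and ends up with the rule, acquired the same way a child acquires it.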
>An idea presented to such a mind will on average give rise to less
>than one idea in reply.
Turing is saying that only a small proportion of ideas presented to a
mind are supercritical. A supercritical idea is one which results in
more ideas being generated by the mind. I think that all ideas
presented to the mind will generate more ideas, as this is part of
learning. When I am taught a new idea, I think about it to see if I
agree with it, and to understand it. I think that learning could be
making up your own ideas about how things work, based upon
what you are taught and previous experience.
>The "skin-of-an-onion" analogy is also helpful. In considering the
>functions of the mind or the brain we find certain operations
>which we can explain in purely mechanical terms. But then in
>what remains we find a further skin to be stripped off, and so on.
I think that this analogy is wrong. I think that the brain is more like
a network of different components all working together to produce
what we call intelligence. The skins that Turing refers to would be
nodes on my network. In Turing's example he is saying that at the
core of the onion the real mind may be found. I do not think that
any single part of the brain is the mind. Rather, all parts of the
brain work together to generate what we refer to as intelligence.
>In the process of trying to imitate an adult human mind we are
>bound to think a good deal about the process which has brought it
>to the state that it is in. We may notice three components.
>(a) The initial state of the mind, say at birth, (b) The education to
>which it has been subjected, (c) Other experience, not to be
>described as education, to which it has been subjected.
>Instead of trying to produce a programme to simulate the adult
>mind, why not rather try to produce one which simulates the
>child's?
I think that this is the correct way to approach the problem. If we
are trying to imitate the adult mind, then why not build a machine
that tries to get to the same state via the same route.
>It will not be possible to apply exactly the same teaching process
>to the machine as to a normal child
This is due to the obvious differences between a machine and a
human: for example, the machine will probably not have the same
social environment as a young human. This could affect the rate at
which it learns, or even what it learns. It could never reach the
same state as an adult human, as it could never go through the same
life experiences that an adult human has been through.
>suppose the teacher says to the machine, "Do your homework
>now." This may cause "Teacher says 'Do your homework now' "
>to be included amongst the well-established facts.
Does this mean that the machine is thinking, or is it simply
following the rules in its program? Would the machine be able to
say no and refuse to do its homework, even though it knows it
must? Children do have this option; they do not have to follow
orders from authority figures.
>Intelligent behaviour presumably consists in a departure from the
>completely disciplined behaviour involved in computation
I do not think that intelligent behaviour is completely disciplined
behaviour, as we have a choice in what we do, depending upon how we
feel. But I do think that what we feel is based upon previous
experience, which is similar to being based upon rules learnt in life.
>We may hope that machines will eventually compete with men in
>all purely intellectual fields. But which are the best ones to start
>with? Even this is a difficult decision. Many people think that a
>very abstract activity, like the playing of chess, would be best.
As we all know, this has already been done: a machine (IBM's
Deep Blue) has beaten the best human player in the world. But was
the machine really thinking? It was simply looking ahead a certain
distance at all of the possible moves it could make and, using
a minimax search, finding the best move available. This is a
simple algorithm based on simple arithmetic, performed by what is in
essence an overgrown calculator. Does this really constitute
thinking? I think that maybe it does. It bases a decision on all of
the knowledge available to it at the time. The fact that it is not
random means there was a reason for making that choice, and that
reason could be that the machine thought about it.
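The lookahead described above can be sketched in a few lines (my own illustration of the general minimax idea, not Deep Blue's actual code): explore the game tree to a fixed depth, assume the opponent picks their best reply, and choose the move with the best guaranteed score.

```python
# Generic minimax over a game tree: the maximising player takes the
# largest score, the minimising opponent the smallest.
def minimax(state, depth, maximising, moves, apply_move, evaluate):
    options = moves(state)
    if depth == 0 or not options:
        return evaluate(state)
    scores = [minimax(apply_move(state, m), depth - 1, not maximising,
                      moves, apply_move, evaluate) for m in options]
    return max(scores) if maximising else min(scores)

# A tiny hypothetical game tree; leaves hold payoffs for the maximiser.
tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}
payoff = {"D": 3, "E": 5, "F": 2, "G": 9}

best = minimax("A", 2, True,
               moves=lambda s: tree.get(s, []),
               apply_move=lambda s, m: m,
               evaluate=lambda s: payoff.get(s, 0))
print(best)  # 3: the opponent would never let the maximiser reach 5 or 9
```

This is 'simple arithmetic' in exactly the sense complained about above: the machine never does anything but compare numbers, yet its choice has a reason behind it.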
>It can also be maintained that it is best to provide the machine
>with the best sense organs that money can buy, and then teach it
>to understand and speak English. This process could follow the
>normal teaching of a child. Things would be pointed out and
>named, etc.
This is because of the symbol-grounding problem: a machine
needs to be able to link some symbols in its vocabulary to objects
in the real world. Without this it may be performing meaningless
symbol manipulation. If some of the symbols have meaning,
then other symbols can be defined in terms of these, and the
machine may be able to learn.
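The grounding idea can be sketched as follows (a toy of my own; the sensor fields and thresholds are hypothetical): a few symbols are tied directly to sensor readings, and further symbols are then defined only in terms of the already-grounded ones.

```python
# Grounded symbols are tied directly to (hypothetical) sensor readings.
grounded = {
    "red":   lambda percept: percept["wavelength_nm"] > 620,
    "round": lambda percept: percept["circularity"] > 0.9,
}

# A derived symbol, defined only in terms of grounded symbols, so its
# meaning ultimately bottoms out in the sensors.
defined = {
    "tomato-like": lambda p: grounded["red"](p) and grounded["round"](p),
}

# One hypothetical frame of sensor data: something red and round.
percept = {"wavelength_nm": 650, "circularity": 0.95}
print(defined["tomato-like"](percept))  # True
```

Without the `grounded` layer the machine would only be shuffling the string "tomato-like"; with it, the symbol is anchored to something the sense organs can actually report.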
I think that there is a difference between thinking and intelligence.
The two are related, but it is very easy to get confused between
them. Intelligence is a higher form than thinking. A machine
does not have to be intelligent to be able to think, but it has to be
able to think to be intelligent. Thinking is making informed choices
based upon information available. I cannot say what intelligence is,
but I am sure it is more than this.
This archive was generated by hypermail 2.1.4 : Tue Sep 24 2002 - 18:37:19 BST