Re: Turing Test

From: Worrall, Nicholas (nw297@ecs.soton.ac.uk)
Date: Thu Feb 10 2000 - 12:09:47 GMT


http://cogprints.soton.ac.uk/abs/comp/199807017

> Shaw:
> In his paper 'Computing Machinery and Intelligence', Turing
> considers the question 'Can machines think?'. In light of the
> ambiguity concerning the words 'Machine' and 'Think' - he proposes
> an alternative 'test' to answer the same question:

> TURING
> The new form of the problem can be described in terms of a game
> which we call the 'imitation game'. It is played with three
> people, a man (A), a woman (B), and an interrogator (C) who may be
> of either sex. The interrogator stays in a room apart from the
> other two. The object of the game for the interrogator is to
> determine which of the other two is the man and which is the
> woman. He knows them by labels X and Y, and at the end of the game
> he says either "X is A and Y is B" or "X is B and Y is A."
...
> It is A's object in the game to try and cause C to make the
> wrong identification.
...
> We now ask the question, "What will happen when a machine
> takes the part of A in this game?" Will the interrogator decide
> wrongly as often when the game is played like this as he does when
> the game is played between a man and a woman? These questions
> replace our original, "Can machines think?"

> Shaw:
> One question that could be asked about this test is whether it is
> possible for a machine to deceive the interrogator by applying a
> (comprehensive) set of rules. Perhaps it could be argued that,
> over a long period of time, the machine would require the ability
> to 'think' in the same way as the interrogator in order to
> maintain the deception. Surely the outcome will also depend on the
> ability of the interrogator to ask appropriate questions, and on
> his or her preconceptions of how machines behave.

Given the idea that maintaining the deception requires the machine to
'think' in the same way as the interrogator, surely it is equally
possible to say that the machine merely 'emulates' the way a human
answers questions. To deceive the interrogator, the machine must
respond with human-like characteristics, such as a plausible time
delay and occasional confusion, to allow for the ambiguity of thought.
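
As a rough sketch (entirely my own illustration, not anything Turing
proposes), these surface characteristics could be emulated by wrapping
whatever mechanism actually produces the machine's answers in a layer
that delays and occasionally hedges its replies:

    import random
    import time

    def humanise(answer):
        """Add human-like surface characteristics to a reply: a delay
        that grows with the answer's length, as if it were being
        typed, and an occasional feigned moment of confusion."""
        time.sleep(0.2 * len(answer) + random.uniform(0.5, 2.0))
        if random.random() < 0.1:
            return "Sorry, I lost my train of thought. " + answer
        return answer

    # Hypothetical usage: 'oracle' stands for whatever actually
    # answers the interrogator's questions.
    # print(humanise(oracle("Are you the woman, X?")))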

> Shaw:
> With regard to the definition of the term 'Machine' in the test,
> Turing says:

> TURING
> We also wish to allow the possibility that an engineer or team of
> engineers may construct a machine which works, but whose manner of
> operation cannot be satisfactorily described by its constructors
> because they have applied a method which is largely experimental.

> Shaw:
> This could be important, because it removes the requirement that
> the designer of the system should understand its working. Assuming
> that it is possible to construct a 'thinking' machine and
> establish that it can 'think' (the original problem), the engineer
> would not need to understand the thought process itself. For
> example, if a neural network of sufficient complexity could be
> constructed and trained so as to pass the Turing test, the
> designer would almost certainly be unable to explain its operation
> at a low level.

While the term 'experimental' fits the idea of machine learning, and
is most likely what is meant here, we must still consider that the
'experiment' may simply be a new architecture. Turing suggests that
the constructors (an undefined group) may each know parts of the
system individually without knowing the entire system. His words also
imply that the operation (or rule base) cannot be understood as a
collective entity, which in modern terms most likely describes
unsupervised machine learning.
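
Shaw's neural network example makes this concrete. As a minimal
sketch (my own illustration, nothing from the paper), even a toy
network built by a 'largely experimental' method, here plain random
hill-climbing, can end up working while encoding its behaviour in
numbers its constructor cannot satisfactorily explain:

    import random

    def net(w, x):
        """A tiny two-layer network: 2 inputs, 2 hidden units, 1 output."""
        h0 = max(0.0, w[0]*x[0] + w[1]*x[1] + w[2])
        h1 = max(0.0, w[3]*x[0] + w[4]*x[1] + w[5])
        return w[6]*h0 + w[7]*h1 + w[8]

    XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

    def error(w):
        return sum((net(w, x) - t) ** 2 for x, t in XOR)

    # The 'experimental' method: keep random changes that happen to help.
    w = [random.uniform(-1, 1) for _ in range(9)]
    for _ in range(20000):
        trial = [wi + random.gauss(0, 0.1) for wi in w]
        if error(trial) < error(w):
            w = trial

    # The machine (usually) works, yet the nine numbers in w describe
    # its manner of operation in no way its constructor could explain.
    print([round(net(w, x)) for x, _ in XOR])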

> Shaw:
> Later in the paper, Turing asserts that a digital computer can
> produce the same effects as any 'discrete state machine', in which
> only two states of any element are considered, nothing in-between:

> TURING
> This special property of digital computers, that they can mimic
> any discrete-state machine, is described by saying that they are
> universal machines. The existence of machines with this property
> has the important consequence that, considerations of speed apart,
> it is unnecessary to design various new machines to do various
> computing processes. They can all be done with one digital
> computer, suitably programmed for each case. It will be seen that
> as a consequence of this all digital computers are in a sense
> equivalent.

> Shaw:
> This seems to be quite a convincing argument in favour of machines
> eventually being able to think. Can't the brain be considered a
> discrete-state machine: surely a neuron either fires or it doesn't
> and it is this that determines the effect on the rest of the
> brain.

The term that Turing introduces here is quite ambiguous and can
create a misconception: 'digital computer' can mean a very large
number of things, and he introduces the qualification 'considerations
of speed apart' to make his point. I disagree that the brain can be
considered a discrete-state machine, as it is a mass of
interconnected neurons, not a single neuron. We can say almost for
certain that one neuron could act as a finite-state machine, but to
extend this to the whole brain and mind may be overestimating the
situation.
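
For the single-neuron case, the classic McCulloch-Pitts model shows
what such a discrete-state element looks like; a minimal sketch (my
own illustration, not a claim about real neurons):

    def neuron(inputs, weights, threshold):
        """A McCulloch-Pitts neuron: a two-state element that either
        fires (1) or does not (0), with nothing in between."""
        total = sum(i * w for i, w in zip(inputs, weights))
        return 1 if total >= threshold else 0

    # One neuron acting as a discrete-state AND gate:
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", neuron([a, b], [1, 1], threshold=2))

Whether a mass of billions of such interconnected elements can still
usefully be treated as one big discrete-state machine is exactly the
point in dispute.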

> Shaw:
> One of the arguments that Turing defends against is that machines
> will never be able to be the subject of their own thoughts:

> TURING
> The claim that a machine cannot be the subject of its own thought
> can of course only be answered if it can be shown that the machine
> has some thought with some subject matter. Nevertheless, "the
> subject matter of a machine's operations" does seem to mean
> something, at least to the people who deal with it. If, for
> instance, the machine was trying to find a solution of the
> equation x^2 - 40x - 11 = 0 one would be tempted to describe this
> equation as part of the machine's subject matter at that moment.
> In this sort of sense a machine undoubtedly can be its own subject
> matter. It may be used to help in making up its own programmes, or
> to predict the effect of alterations in its own structure. By
> observing the results of its own behaviour it can modify its own
> programmes so as to achieve some purpose more effectively. These
> are possibilities of the near future, rather than Utopian dreams.

> Shaw:
> Is this what is meant by 'thoughts'? Computers can alter their
> behaviour to improve some measure of performance, but they aren't
> really thinking, they are following rules. Surely to say that an
> entity is the subject of its own thought implies that it has a
> concept of itself in relation to the rest of the world. Do we
> consider animals to be the subject of their own thoughts when they
> learn to perform tasks with greater aptitude?

"It may be used to help in making up its own programmes, or
 to predict the effect of alterations in its own structure."
 
Can we determine from this that, in making up its own programmes, the
machine can be said to be learning, and from this speak of machine
learning? Essentially, making up its own programmes can be perceived
as altering a rule base or action base. In its simplest form this
could be considered self-awareness, though not self-awareness of the
kind the mind has. If we treat self-awareness as a matter of levels,
the human mind could be considered the most self-aware, with animals
next, and a self-altering program a low, but still arguably real,
form of self-awareness.
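
To make that simplest form concrete, here is a deliberately crude
sketch (my own, not Turing's) of a program that observes the results
of its own behaviour and modifies its own rule base so as to achieve
some purpose more effectively:

    import random

    # The rule base: a situation maps to candidate actions, each with
    # a weight that the program itself will rewrite.
    rules = {"greet": {"hello": 1.0, "ignore": 1.0}}

    def act(situation):
        actions = rules[situation]
        return random.choices(list(actions), list(actions.values()))[0]

    def observe_and_modify(situation, action, success):
        """The program altering its own 'programme' in the light of
        the observed result of its own behaviour."""
        rules[situation][action] *= 1.2 if success else 0.8

    # Toy environment: greeting succeeds, ignoring fails.
    for _ in range(100):
        a = act("greet")
        observe_and_modify("greet", a, success=(a == "hello"))

    print(rules)  # 'hello' now heavily outweighs 'ignore'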

> Shaw:
> Another interesting criticism is that machines can only ever do
> what we tell them, to which the answer is:

> TURING
> One could say that a man can "inject" an idea into the machine,
> and that it will respond to a certain extent and then drop into
> quiescence, like a piano string struck by a hammer. Another simile
> would be an atomic pile of less than critical size: an injected
> idea is to correspond to a neutron entering the pile from without.
> Each such neutron will cause a certain disturbance which
> eventually dies away. If, however, the size of the pile is
> sufficiently increased, the disturbance caused by such an
> incoming neutron will very likely go on and on increasing until
> the whole pile is destroyed. Is there a corresponding phenomenon
> for minds, and is there one for machines? There does seem to be
> one for the human mind. The majority of them seem to be
> "subcritical," i.e., to correspond in this analogy to piles of
> subcritical size. An idea presented to such a mind will on average
> give rise to less than one idea in reply. A smallish proportion
> are supercritical. An idea presented to such a mind may give
> rise to a whole "theory" consisting of secondary, tertiary and
> more remote ideas.

> Shaw:
> These analogies are interesting, because human beings are
> constantly thinking in some way or another without requiring
> explicit provocation. In some cases, thought is clearly
> structured, for example when we are solving a problem, but the
> rest of the time we can decide what to devote our thoughts to,
> subject to some initial stimulus. This can result in our 'state
> of mind' changing, so that, for example, after a period of time
> with no external stimulus, our response to a question might
> change. Perhaps if a machine could be seen to exhibit this kind
> of behavior, it could be considered to be 'thinking'.

The human mind, to a certain extent, has the ability to direct what
its thoughts attend to, but under extreme stimulus, such as strong
emotion, the mind gets channelled into thoughts directed at the
stimulus. The idea of secondary, tertiary and more remote ideas is an
interesting point, as most machine computation is based on the
assumption that there is a finite set of replies to a given rule, and
in most cases only one. To develop the idea of remote thought,
considerable attention must be paid to the wandering mind: thoughts
may drift from one thing to the next without prior deliberation, as
in dreams.
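
Turing's pile analogy suggests one way a machine could escape the
one-rule-one-reply pattern. As a sketch (my own reading of the
analogy, not anything in the paper), let ideas be nodes in an
association network; an injected idea then either dies away or goes
'supercritical' depending on a gain factor playing the role of pile
size:

    # A toy association network: each idea triggers related ideas.
    associations = {
        "fire":   ["heat", "light"],
        "heat":   ["summer", "energy"],
        "light":  ["sun", "speed"],
        "summer": ["holiday"],
        "energy": ["atom"],
        "atom":   ["pile", "fire"],  # a loop: thoughts circle back
    }

    def inject(idea, gain, steps=6):
        """Follow an injected idea through the network. Below a gain
        of 1 the disturbance dies away; above 1 it grows into
        secondary, tertiary and more remote ideas."""
        active, strength = [idea], 1.0
        for _ in range(steps):
            strength *= gain
            if strength < 0.5:  # subcritical: the reply dies away
                break
            active = [n for i in active for n in associations.get(i, [])]
            print(round(strength, 2), active)

    inject("fire", gain=0.6)  # subcritical: one reply, then silence
    inject("fire", gain=1.4)  # supercritical: a cascade of ideas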

> Shaw:
> Towards the end of the paper, Turing considers the possibility of
> 'educating' a primitive machine:

> TURING
> In the process of trying to imitate an adult human mind we are
> bound to think a good deal about the process which has brought it
> to the state that it is in. We may notice three components.
>
> (a) The initial state of the mind, say at birth,

Surely there are different mind states at birth? Different levels of
perception?

> TURING
> (b) The education to which it has been subjected,

Again different levels of education?

> TURING
> (c) Other experience, not to be described as education, to which
> it has been subjected.
>
> Instead of trying to produce a programme to simulate the adult
> mind, why not rather try to produce one which simulates the
> child's? If this were then subjected to an appropriate course of
> education one would obtain the adult brain. Presumably the child
> brain is something like a notebook as one buys it from the
> stationer's. Rather little mechanism, and lots of blank sheets.
> (Mechanism and writing are from our point of view almost
> synonymous.) Our hope is that there is so little mechanism in the
> child brain that something like it can be easily programmed. The
> amount of work in the education we can assume, as a first
> approximation, to be much the same as for the human child.

> Shaw:
> This paragraph seems to overlook some important points: The
> child's brain is presumably immediately capable of experiencing
> emotion, which must be a strong factor in determining its actions.
> This is combined with a vast array of sensory inputs which
> contribute to the child's emotional state, so that 'education'
> could not be encapsulated in a simple dialog with a teacher.
> Furthermore, the child has a strong incentive to learn: survival.
> What motivation would a machine have to learn, wouldn't it need to
> experience pleasure and pain and other emotions as well? Surely
> the ability of a machine to learn to interact with human beings
> would depend on its ability to sympathise with their situation
> through experience of similar situations - wouldn't this require
> emotion?

The points that Leo makes are very true. How can we separate life
experiences into sections? Surely there is a massive overlap:
interactions with other minds allow expansion of our own, and this
shapes how we think and perceive situations. Emotion would be needed
for a total emulation of the mind, but not necessarily for a positive
result on a Turing test; emotions could be emulated by rules mapping
stimuli to responses, which might be enough for an interrogator to be
fooled. Anyone remember Ridley Scott's 'Blade Runner'? The
Voigt-Kampff test on the replicants? Is this an adaptation of the
Turing test to include an emotional response?
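
A minimal sketch of 'emotions emulated around rules' (entirely
hypothetical, intended only to show how shallow such an emulation can
be while still producing plausible answers):

    # A hypothetical rule table: stimulus keyword -> feigned emotion.
    EMOTION_RULES = {
        "dying":  ("distress", "That's a terrible thing to ask."),
        "mother": ("warmth", "Let me tell you about my mother."),
        "insult": ("anger", "There's no need to be rude."),
    }

    def emotional_reply(question):
        """Pattern-match the question against the rule table and
        return a canned emotional response: rules, not feelings."""
        for keyword, (emotion, reply) in EMOTION_RULES.items():
            if keyword in question.lower():
                return emotion, reply
        return "neutral", "I see."

    print(emotional_reply("How would you feel about dying?"))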

Primary: Shaw, Leo <las197@ecs.soton.ac.uk>
Respondent: Nick Worrall <nw297@ecs.soton.ac.uk>


