Re: Turing Test

From: HARNAD, Stevan (harnad@coglit.ecs.soton.ac.uk)
Date: Wed Feb 16 2000 - 20:34:21 GMT


http://cogprints.soton.ac.uk/abs/comp/199807017

On Thu, 10 Feb 2000, Worrall, Nicholas wrote:

> Worrall:
> Given the idea that the test can be simulated as a machine requiring the
> ability to 'think' in the same way as the interegator, surely it is
> simply possible to say that it can 'emulate' the way that the
> interrogator asks questions to decieve the other entitys. Given the
> questions of the interrogator the entitys must respond with similar
> characteristics such as time delay and possible confusion to allow for
> the ambiguiety of thought.

Well, first off, there's one thing I can tell here without
Turing-Testing, and that is that you did not use a spell-checker!

Please everyone, all messages must be spell-checked before being sent
off!

Now to content. Call it imitation, emulation, or simulation, the
question is the same: Would such a system really be thinking?
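
To see how cheap such imitation can be, here is a toy sketch (Python;
the canned-answer table, the typing rate and the typo rate are all my
invention for illustration) of faking the surface characteristics
Worrall mentions, response delay and "possible confusion":

    import random
    import time

    def humanize(reply):
        # "Possible confusion": occasionally transpose two letters.
        if len(reply) > 3 and random.random() < 0.1:
            i = random.randrange(len(reply) - 1)
            reply = reply[:i] + reply[i + 1] + reply[i] + reply[i + 2:]
        return reply

    def respond(question):
        # Hypothetical canned-answer table; a real contestant's would be vast.
        canned = {"are you human?": "Of course I am."}
        answer = canned.get(question.lower().strip(),
                            "Hmm, let me think about that.")
        # Pause as if typing at roughly five characters per second.
        time.sleep(len(answer) / 5.0 + random.uniform(0.0, 2.0))
        return humanize(answer)

    print(respond("Are you human?"))

Nothing in it is even a candidate for thinking, which is why imitation,
emulation and simulation all raise the same question.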

> > TURING:
> > We also wish to allow the possibility that an engineer or team of
> > engineers may construct a machine which works, but whose manner of
> > operation cannot be satisfactorily described by its constructors
> > because they have applied a method which is largely experimental
>
> Worrall:
> While the term 'experimental' may be valid for the idea of machine
> learning, and is most likely what is meant in this case, we must still
> consider the idea that the 'experiment' may just be a new architecture.
> Turing suggests that the constructors (an undefined group) may know
> parts of the system individually but not know the entire system. His
> words also suggest that the operation (or rule base) cannot be
> understood as a collective entity, which in these terms most likely
> means unsupervised machine learning?

Turing is not committing himself to any particular algorithm here, only
to a system that is designed and works, even though its designers don't
know how or why it works.
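
As a hedged illustration of that possibility (a sketch of my own, not
anything Turing proposed), here is a builder applying "a method which
is largely experimental", blind hill-climbing on the weights of a tiny
network, and ending up with a device that works even though its nine
final numbers explain nothing:

    import math
    import random

    CASES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # XOR

    def squash(z):
        return 1.0 / (1.0 + math.exp(-z))

    def net(w, x):
        # Two inputs -> two hidden units -> one output.
        h1 = squash(w[0] * x[0] + w[1] * x[1] + w[2])
        h2 = squash(w[3] * x[0] + w[4] * x[1] + w[5])
        return squash(w[6] * h1 + w[7] * h2 + w[8])

    def error(w):
        return sum((net(w, x) - y) ** 2 for x, y in CASES)

    w = [random.uniform(-1.0, 1.0) for _ in range(9)]
    best = error(w)
    for _ in range(200000):                  # blind experimentation
        trial = [wi + random.gauss(0.0, 0.3) for wi in w]
        e = error(trial)
        if e < best:
            w, best = trial, e

    print([round(net(w, x)) for x, _ in CASES])  # usually [0, 1, 1, 0]
    print(w)  # it works, but these nine numbers are no explanation

The algorithm is deliberately dumb; the point is only that
"constructed" and "understood" can come apart.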

> Worrall:
> I disagree that the brain can be considered a discrete-state machine, as
> it is a mass of interconnected neurons, not just one neuron. We can take
> it as almost certain that a single neuron could act as a finite state
> machine, but in extending this to the brain and mind we may be
> overestimating the situation.

I agree.
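
And the concession about the single neuron is easy to make concrete:
here is a crude sketch (Python; the states, the threshold and the
inputs are my idealization, not neurophysiology) of one neuron as a
finite-state machine:

    STATES = ("resting", "firing", "refractory")

    def step(state, input_spikes, threshold=3):
        # One discrete time-step of the idealized neuron.
        if state == "resting":
            return "firing" if input_spikes >= threshold else "resting"
        if state == "firing":
            return "refractory"   # must recover before it can fire again
        return "resting"          # refractory -> resting

    state = "resting"
    for spikes in (1, 4, 4, 0):
        state = step(state, spikes)
        print(state)              # resting, firing, refractory, resting

The hard part is exactly the leap from one such trivially discrete unit
to the brain, let alone the mind.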

> TURING:
> "It may be used to help in making up its own programmes, or
> to predict the effect of alterations in its own structure."
>
> Worrall:
> Can we determine from this that when the machine is making up its own
> programmes it can be said to be learning, and from this we could call
> it machine learning? Essentially, making up its own programmes can
> be perceived as altering a rule base or action base. This in its
> simplest form can be considered being self-aware, but cannot be
> described as being as self-aware as the mind. If we consider
> self-awareness as having levels, then the human mind could be
> considered the most self-aware, with animals next; the self-awareness
> of a self-altering program could be considered a low form of
> self-awareness, but still arguably self-awareness.

I think you are being far too generous with what you are ready to call
"self-aware" in the case of machines. (And why "self"? Isn't "aware"
[conscious] enough? And why "levels"? If something is aware, only of
"ouch", isn't it already up there with us in the relevant respect -- though
it is not necessarily very smart -- whereas if not, it's not?)

So don't give it all away in advance: We're (Turing-) TESTING for
something here, something we haven't quite got yet...

> Worrall:
> The human mind to a certain extent has the ability to perceive what
> its thoughts are directed to, but under some extreme stimuli, such as
> emotions, the mind gets channelled into thoughts directed at the
> stimulus. The idea of secondary, tertiary and more remote ideas is an
> interesting point, as most machine computation is based upon the idea
> that there is a finite reply to a given rule, and in most cases only
> one. To develop the idea of remote thought, considerable attention
> must be paid to the wandering-mind idea: the mind and thoughts may
> wander from one thing to the next without prior thought, like dreams.

It seems to me you can build in as many hierarchical levels of machine
function as you like; it's still not at all clear why any of it would
amount to thinking, or to ideas...
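
Indeed, even the "wandering" can be built in cheaply: a toy contrast
(Python; both tables invented for illustration) between the
one-rule-one-reply scheme described above and a walk that drifts from
one association to the next:

    import random

    RULES = {"hello": "hi", "how are you?": "fine"}   # one fixed reply each

    ASSOCIATIONS = {
        "sea":     ["ship", "salt", "holiday"],
        "ship":    ["sail", "sea"],
        "salt":    ["food", "sea"],
        "holiday": ["sun", "ship"],
        "sail":    ["wind", "sea"],
        "food":    ["salt"],
        "sun":     ["holiday"],
        "wind":    ["sail"],
    }

    def rule_reply(utterance):
        return RULES.get(utterance, "no rule")

    def wander(start, steps=5):
        # Drift from one idea to a random neighbour, like a daydream.
        path = [start]
        for _ in range(steps):
            path.append(random.choice(ASSOCIATIONS[path[-1]]))
        return path

    print(rule_reply("hello"))   # always "hi"
    print(wander("sea"))         # e.g. ['sea', 'ship', 'sail', 'wind', ...]

The walk meanders, but meandering is no more evidence of ideas than
table lookup is.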

> > TURING
> > (c) Other experience, not to be described as education, to which
> > it has been subjected.
> >
> > Instead of trying to produce a programme to simulate the adult
> > mind, why not rather try to produce one which simulates the
> > child's? If this were then subjected to an appropriate course of
> > education one would obtain the adult brain. Presumably the child
> > brain is something like a notebook as one buys it from the
> > stationer's. Rather little mechanism, and lots of blank sheets.
> > (Mechanism and writing are from our point of view almost
> > synonymous.) Our hope is that there is so little mechanism in the
> > child brain that something like it can be easily programmed. The
> > amount of work in the education we can assume, as a first
> > approximation, to be much the same as for the human child.
>
> > Shaw:
> > This paragraph seems to overlook some important points: The
> > child's brain is presumably immediately capable of experiencing
> > emotion, which must be a strong factor in determining its actions.
> > This is combined with a vast array of sensory inputs which
> > contribute to the child's emotional state, so that 'education'
> > could not be encapsulated in a simple dialog with a teacher.
> > Furthermore, the child has a strong incentive to learn: survival.
> > What motivation would a machine have to learn, wouldn't it need to
> > experience pleasure and pain and other emotions as well? Surely
> > the ability of a machine to learn to interact with human beings
> > would depend on its ability to sympathise with their situation
> > through experience of similar situations - wouldn't this require
> > emotion?
>
> Worrall:
> The points that Leo makes are very true: how can we separate life
> experiences into sections? Surely there is a massive overlap;
> interactions with other minds allow expansion of our own, and this
> determines how we think and perceive situations. Emotion would be
> needed for a total emulation of the mind, but not necessarily for a
> positive result on a Turing test: emotions could be emulated around
> rules based on responses, which would allow an interrogator to be
> fooled.

How do we get from rules and responses to emotions? And if all I have
is "ouch" do I not have emotions, even if I'm not social and can't be
fooled?

So how/when could/would a machine have "ouch"?
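
To see what "emotions emulated around rules based on responses" might
amount to, here is a minimal sketch (Python; the patterns and the
canned replies are pure invention), and note that nothing in it is even
a candidate for feeling "ouch":

    import re

    EMOTION_RULES = [
        (re.compile(r"\byou (stupid|useless)\b", re.I),
         "That really hurts my feelings."),
        (re.compile(r"\b(congratulations|well done)\b", re.I),
         "Thank you! I'm delighted."),
        (re.compile(r"\b(died|death)\b", re.I),
         "I'm so sorry. That must be terribly painful."),
    ]

    def emote(utterance):
        # Pattern-matched canned affect: sounds emotional, feels nothing.
        for pattern, canned_feeling in EMOTION_RULES:
            if pattern.search(utterance):
                return canned_feeling
        return "I see."

    print(emote("My goldfish died."))  # sounds sad; there is no one it hurts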

> Worrall:
> Anyone remember Ridley Scott's 'Blade Runner'? The Voigt-Kampff test on
> the Replicants? Is this an adaptation of the Turing test to include an
> emotional response?

I haven't seen it, but I expect the notion of the mental life (or lack
of it) of the replicants was as incoherent as that of the Vulcans or
Data. According to the TT, if they can pass as one of us in other
respects, the fact that they may be rather dull, or phlegmatic, or
listless, or mechanical does not mean they don't have minds (I have
friends like that!).

Stevan


