Edmonds: Constructibility of AI

From: Crisp Jodi (jc1198@ecs.soton.ac.uk)
Date: Tue May 01 2001 - 01:50:30 BST


The Constructability of Artificial Intelligence (as defined by the Turing
Test) - Bruce Edmonds

> EDMONDS:
> 1. Dynamic aspects of the Turing Test
> The elegance of the Turing Test comes from the fact that it is not a
> requirement upon the mechanisms needed to implement intelligence but on
> the ability to fulfil a role. In the language of biology, Turing specified
> the niche that intelligence must be able to occupy rather than the anatomy
> of the organism.

Crisp:
Since no particular mechanism is being required for the implementation of
intelligence here, the definition of what intelligence is need not come
into question. The actual usefulness of 'the ability to fulfil a role'
does need to be considered, though, along with its relation to intelligence.
Fulfilling an 'intelligent' role could be the key to intelligence, since if
something, for example, walks like a duck and quacks like a duck, it may
as well be a duck. Of course, what actually counts as an intelligent role
is questionable.

> EDMONDS:
> The role that Turing chose was a social role - whether humans
> could relate to it in a way that was sufficiently similar to a human
> intelligence that they could mistake the two.

Crisp:
The social role is not directly dependent on biology, and in that regard
it is a suitable role.

It could be argued that the social role is definitely not needed for someone
to be intelligent - for example, there are cases where children have been
kept in isolation and have not learned language or social skills. It could
then be argued back that they have the potential to learn these, but then,
so might a computer. One thing to consider with humans is that there seems
to be a critical period for language acquisition; whether such a thing
would exist on a computer is an open question.

The social role seems at times purely arbitrary, but the only way that we
can really have any indication that other people may be intelligent is
through the social interactions that we observe. Therefore, in the same
way, by observing the social interactions of computers, we may get an
indication of their intelligence too. Since we don't actually know that
other people are intelligent, due to the Other Minds problem, we can never
actually know whether computers are either. Therefore, by applying the same
rules we seem to apply unknowingly to humans, the social role seems like
the obvious choice for 'testing' intelligence.

> EDMONDS:
> What is unclear from Turing's 1950 paper, is the length of time that was
> to be given to the test. It is clearly easier to fool people if you only
> interact with them in a single period of interaction.

Crisp:
The test should not 'fool' people, since that implies that at some point
people might find out the 'truth', at which point the subject has failed
the test. Instead of fooling people, something that passes the test could
be taken actually to be the truth, since to actually be true implies no
failures.

> EDMONDS:
> The longer the period of interaction lasts and the greater the variety of
> contexts it can be judged against, the harder the test.

Crisp:
Continuing with the social aspect and the party analogy, 'the deeper
testing of abilities' could be seen as similar to meeting someone - if you
talk to them for a longer amount of time, and think things are going well,
you may become more convinced they are your friend, and if you meet up
with them in different social situations, their friendship with you may be
'put to the test'. The chess analogy that is given is also a useful example,
although you would not usually question a person constantly on the same
subject.

> EDMONDS:
> The ability of entities to participate in a cognitive 'arms-race', where
> two or more entities try to 'out-think' each other seems to be an
> important part of intelligence.

Crisp:
The problem with trying to 'out-think' each other is that even humans will
pretend to know things that they don't, and will often get facts wrong or
make them up to look more intelligent. Humans do not each possess in equal
quantities the skills that we call intelligence. Using the chess example,
questioning someone who has claimed to be a chess expert may fail because
you yourself don't know about chess, and they may turn out to be not a
chess expert but a draughts expert - yet this doesn't mean either party
lacks intelligence.

> EDMONDS:
> I will adopt a reading of the Turing Test, such that a candidate must pass
> muster over a reasonable period of time, punctuated by interaction with
> the rest of the world.

Crisp:
The question of a reasonable period of time can be debated, since to
properly pass the test the candidate should be able to pass always, even
many years later, and should be able to pass the test for everyone, not
just a few people. Interaction with the rest of the world is important,
although at this point in the paper exactly what that interaction involves
is unclear - does it mean gaining sensorimotor information, or just
extending a knowledge base, as in reading a newspaper with a chess article
as mentioned earlier?

> EDMONDS:
> It requires the candidate entity to participate in the reflective and
> developmental aspects of human social intelligence, so that an imputation
> of its intelligence mirrors our imputation of each other's intelligence.

Crisp:
The distinction that the long-term Turing Test draws - testing 'reflective
and developmental aspects of social intelligence' - is important, since it
seems to be saying that a candidate who passes won't possess just any
intelligence; it will possess intelligence of the human social variety, and
only the reflective and developmental aspects at that. This is good, since
the definition of intelligence used is made explicit, and there are
therefore fewer problems, such as that a person can be intelligent without
being socially intelligent, or an animal might be intelligent but lack
human intelligence. Similarly, mirroring our imputations of each other's
intelligence is significant, as stated earlier, in relation to the other
minds problem.

> EDMONDS:
> That the LTTT is a very difficult task to pass is obvious (we might
> ourselves fail it during periods of illness or distraction)

Crisp:
Even if we do fail it during periods of illness or distraction, this does
not detract from the test, since it has already been stated that it tests
only one type of intelligence, and even in periods of illness or
distraction, we will still possess others.

> EDMONDS:
> In addition to the difficulty of implementing problem-solving, inductive,
> deductive and linguistic abilities, one also has to impart to a candidate
> a lot of background and contextual information about being human
> including: a credible past history, social conventions, a believable
> culture and even commonality in the architecture of the self. A lot of
> this information is not deducible from general principles but is specific
> to our species and our societies.

Crisp:
This description helps to tell us that the problem is definitely not a
trivial one, but involves many complexities, and solving it may well be
tough.

> EDMONDS:
> 2. The Constructability of TMs

Crisp:
In this section, Edmonds argues that deliberately constructing an
intelligent machine as a result of an intended plan may well not be
possible.

> EDMONDS:
> The definition of a TM is not constructive - it is enough that a TM could
> exist, there is no requirement that it be constructable. This can be
> demonstrated by considering a version of Turing's 'halting problem'.
> Whatever method we have for constructing TMs from specifications there
> will be an n for which we can not construct TM(n), even though TM(n) is
> itself computable.

Crisp:
This is shown with the halting problem, since it can be implemented as a
simple look-up table, but may be a problem with the TM, which seems to be
functionally independent, where as if the halting problem was just part of a
bigger system, this may not be a problem.

In any case, we should not lose track of the original problem -
participating in the reflective and developmental aspects of human social
intelligence, and mirroring our view of other human beings' intelligence.
We therefore need to realize that human beings suffer from a version of
the halting problem: they will die and be unable to write down afterwards
when they died, whereas another person could do so. Therefore, we need
constructable TMs, since every human intelligent function is constructable
- or at least, we need the TMs to possibly be part of a larger system.
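
To make the look-up-table point concrete, here is a minimal Python sketch -
the names (HALTS_TABLE, halts_by_table, diagonal) are my own illustrations,
not anything from Edmonds' paper. For any fixed, finite set of programs the
halting answers can simply be tabulated, so each individual answer is
trivially computable; but any fixed decider built this way can be
diagonalised against, which is the flavour of argument quoted above.

    # A minimal sketch with toy stand-ins for real programs.

    # For any fixed, finite set of programs we can simply record the
    # answers: the look-up table is itself a (trivially computable)
    # halting decider for exactly those programs.
    HALTS_TABLE = {
        "loop_forever": False,  # e.g. 'while True: pass'
        "return_early": True,   # e.g. 'return 42'
    }

    def halts_by_table(program_name):
        """Decide halting, but only for programs the table covers."""
        return HALTS_TABLE[program_name]

    def diagonal(program_name):
        """Do the opposite of whatever the fixed decider predicts."""
        if halts_by_table(program_name):
            while True:   # predicted to halt, so loop forever
                pass
        return            # predicted to loop, so halt immediately

    # 'diagonal' applied to its own name is a case the fixed table cannot
    # cover without contradiction: halts_by_table("diagonal") raises a
    # KeyError, i.e. this fixed method fails on that case, even though a
    # different table answering it correctly certainly exists - Edmonds'
    # point that TM(n) is computable yet not constructed by the method
    # at hand.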

> EDMONDS:
> What this shows is that any deterministic method of program construction
> will have some limitations. What it does not rule out is that some method
> in combination with input from a random 'oracle' might succeed where the
> deterministic method failed.

Crisp:
Humans could be seen, by some lines of thought, as a deterministic method
of program construction; even if one does not believe in the main forms of
determinism, the deterministic equivalent of the halting problem - death -
cannot simply be ignored. This shows that humans themselves have
limitations.

Input from a random 'oracle' may well be a look-up table, which could then
defeat the halting problem, but the real question is whether we actually
want to defeat it as such.
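
As a rough illustration of why randomness changes the picture (my own
sketch, not Edmonds' construction, and the function names are
hypothetical): the diagonal argument targets one fixed mapping from
specifications to programs, whereas a constructor that mixes in bits from
a random oracle presents no single fixed mapping to diagonalise against.

    import random

    def deterministic_constructor(spec):
        # One fixed spec -> program mapping; a diagonal argument can be
        # aimed at this single, predictable function.
        return "program_%d" % spec

    def oracle_constructor(spec, oracle=random.getrandbits):
        # Mixing oracle bits into construction means each run may yield
        # a different candidate, so there is no single fixed mapping for
        # a diagonal argument to exploit.
        salt = oracle(32)
        return "program_%d_variant_%d" % (spec, salt)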

> EDMONDS:
> The TT is well suited to this purpose, because it is a post-hoc test.
> It specifies nothing about the construction process.

Crisp:
The system that is tested by the TT is made before the test, and thus some
level of design must already have been done, even if the rest is learnt or
extended afterwards.

> EDMONDS:
>3. Artificiality and the Grounding of Knowledge

Crisp:
In this section, Edmonds addresses most of the problems that were found
with his earlier comments.

> EDMONDS:
> Although we can say we constructed the entity before it was put into
> training, this may be far less true of the entity after training. To make
> this clearer, imagine if we constructed 'molecule-by-molecule' a human
> embryo and implanted it into a woman's womb so that it developed, was born
> and grew up in a fashion normal to humans. The result of this process
> would certainly pass the LTTT and we would call it intelligent, but to
> what extent would it be artificial?

Crisp:
This is a good point: there seems little sense in calling something
artificial in this case, because surely that just makes it a human being,
even if it was constructed, since for all intents and purposes it's
exactly the same.

> EDMONDS:
> We know that a significant proportion of human intelligence can be
> attributed to the environment anyway and we also know that a human that is
> not exposed to language at a suitable age would almost certainly not pass
> the
> LTTT.

Crisp:
Edmonds admits that an AI system constructed to be similar to a human may
not have to have the traits that he mentioned earlier.

> EDMONDS:
> Given the flexibility of the processes and its necessary ability to alter
> its own learning abilities, it is not clear that any of the original
> structure would survive. After all, we do not call our artifacts natural
> just because they were initiated in a natural process (our brains), so why
> vice versa?

Crisp:
This seems like a strange example to give, since the processes of the
system do not seem much like artifacts - they seem more like thoughts. This
may just be bad wording, and Edmonds may simply mean thoughts, but we would
probably think of our thoughts, at least in part, as a natural process
anyway. Since it can be argued that not much of ourselves actually survives
as the years progress, it may not be a problem that not much of an original
AI structure would survive either.

> EDMONDS:
> The TT, as specified, is far more than a way to short-cut philosophical
> quibbling, for it implicates the social roots of the phenomena of
> intelligence. This is perhaps not very surprising given that common usage
> of the term 'intelligence' typically occurs in a social context.

Crisp:
Despite its problems, the TT still points towards the relationship between
intelligence and the social context, and thus implies that one way,
although by no means the only way, of signifying intelligence could be
through the social context. The TT itself may not be the way to gain
knowledge of this social context, but it at least points us in some
direction regarding the nature of intelligence.

> EDMONDS:
> I am characterizing intelligence as a characteristic that it is useful to
> impute onto entities because it helps us to predict and understand their
> behaviour.

Crisp:
Instead of just abandoning the definition of intelligence as trivial,
Edmonds is saying that if we can say what intelligence is, even in just
one form, we may be able to put this to more useful ends.


