Immediate future of AI

From: Stevan Harnad (harnad@coglit.ecs.soton.ac.uk)
Date: Sat Jul 28 2001 - 10:36:46 BST


On Fri, 27 Jul 2001, Joe Nickell wrote:

> Greetings Professor Harnad, my name is Joe Nickell; I'm a reporter with
> Smart Business Magazine. I am working on a story, due next Thursday (Aug 2),
> about the present and future state of the art of artificial intelligence. As
> is surely obvious, this plays off of the popularity here in the States of
> Steven Spielberg's "A.I." film. My story is aimed at distilling the most
> important trends under way in artificial intelligence research and applied
> science.

> I'd love to hear your thoughts on this subject (I saw you quoted in a story
> in the Daily Telegraph).

Dear Joe,

You might also look at some of the things I have online on the subject,
http://cogsci.soton.ac.uk/~harnad/genpub.html

> Since we are in markedly different time zones (I'm US Mountain Time, two
> hours later than Eastern), I thought I would put forth some questions for
> you to ponder in email. If you feel like responding via email, that would be
> fine; or if you'd prefer that we speak in person, that will work and I can
> call you, provided that we schedule in advance and you let me know a phone
> number to reach you.

Email's much better (minimizes chances of misquoting!)

> 1. What do you see as the most important current trends in scientific theory
> and research in artificial intelligence? I know this is broad, but I suspect
> there are some issues that you personally think are most relevant today....

The most important change of direction in the past 10 years or so has
been the shift from the purely symbolic approach ("classical AI") to
hybrid symbolic/robotic approaches, including neural nets, and
especially to the problem of grounding symbolic capacities in
sensorimotor capacities in the real world.

Classical AI had thought intelligence was just a set of symbols and
rules for manipulating them. It had good reason to think this, and not
only because that strategy actually turned out to succeed for so many
intelligent capacities (chess, scene description, text "understanding",
text production, problem-solving, etc.) that were otherwise completely
inexplicable: Neither psychology nor brain science previously had a
clue as to how we do these things, and Classical AI showed the way (or
rather A way, but thus far the only one) that it could be done.
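
To make the classical picture concrete, here is a toy sketch (my
illustration only; the facts and rules are invented, not drawn from any
system mentioned above): knowledge represented as symbolic facts, and
"reasoning" as the mechanical application of if-then rules.

    # A minimal, purely illustrative forward-chaining rule system.
    facts = {"socrates_is_human"}
    rules = [
        ({"socrates_is_human"}, "socrates_is_mortal"),
        ({"socrates_is_mortal"}, "socrates_will_die"),
    ]

    def forward_chain(facts, rules):
        """Keep firing any rule whose premises are all known facts."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if premises <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    print(forward_chain(facts, rules))
    # {'socrates_is_human', 'socrates_is_mortal', 'socrates_will_die'}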

Another reason for thinking Classical AI would eventually go the
distance was computational theory: Computation is symbol-manipulation,
and there are strong (and probably correct) reasons to believe that
computation can simulate anything and everything. (This is called the
"Church/Turing Thesis".)

Turing, one of the fathers of computational theory, also proposed the
"Turing Test," according to which, once computers can do everything we
can do so well that we can't tell their performance apart from our own,
then they must be intelligent.

Turing, A.M. (1950) Computing Machinery and Intelligence. Mind 59:
433-460. [Reprinted in Minds and Machines, A. Anderson (ed.),
Englewood Cliffs NJ: Prentice Hall, 1964.]
http://cogprints.soton.ac.uk/abs/comp/199807017

But then problems arose. The first was "scaling". AI's successes were
stunning, and had no competitors, but they did not seem to be scaling
up smoothly enough or quickly enough to eventually cover "everything".
It seemed that to make AI's toy models grow, one had to keep on tacking
on more and more symbolic "knowledge", often customized for the problem
in question, and that seemed neither natural nor economical.

And then there was the philosopher Searle's famous "Chinese Room
Argument" (1980) in which he pointed out that if anyone thought a
computer program that passed the Turing Test for speaking and
understanding Chinese could really understand Chinese, we should
remember that he, Searle, could execute the very same programme
himself, giving everyone the impression that he understood Chinese,
but he would not be understanding a word. He would just be manipulating
symbols. So that can't be what understanding (or intelligence)
really is.

Searle, John. R. (1980) Minds, brains, and programs. Behavioral and
Brain Sciences 3 (3): 417-457
http://www.cogsci.soton.ac.uk/bbs/Archive/bbs.searle2.html

And then came the "Symbol Grounding Problem" (which I formulated), pointing
out that the theory that intelligence is just symbol manipulation
(computation) is the same as the theory that meaning is just
dictionary-definitions. But if you had a Chinese-Chinese
dictionary that defined every word in Chinese, it would be useless to
you unless you already understood at least some Chinese. For otherwise
it would just lead you through an endless series of meaningless
definitions of meaningless Chinese words in terms of other meaningless
(to you) Chinese words, and so on. This "dictionary-go-round" would be
"ungrounded." By the same token, every AI "intelligence" is ungrounded.
How to ground its symbols in something other than just more meaningless
symbols?

Harnad, S. (1990) The Symbol Grounding Problem. Physica D 42: 335-346.
http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad90.sgproblem.html
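
To see the regress concretely, here is a toy sketch (an illustration of
mine; the dictionary entries are invented): following definitions in a
symbols-only dictionary just yields more uninterpreted symbols.

    # The "dictionary-go-round": every entry is defined only by further entries.
    toy_dictionary = {
        "zebra": ["horse", "stripes"],
        "horse": ["animal", "hooves"],
        "stripes": ["bands", "color"],
        "animal": ["living", "thing"],
    }

    def chase_definitions(word, dictionary, steps=3):
        """Follow definitions from word to word; we only ever reach
        more uninterpreted tokens, never their meanings."""
        frontier = [word]
        for _ in range(steps):
            frontier = [w for v in frontier for w in dictionary.get(v, [])]
            print(frontier)

    chase_definitions("zebra", toy_dictionary)
    # Unless some of these tokens are already grounded in something other
    # than symbols, the chase never terminates in meaning.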

That's where real-world robotics came in: Dictionary definitions are a
great way to learn the meaning of new symbols, but only if the symbols
in the definitions are already grounded in some other way. One natural
way is sensorimotor categorization: The things that words stand for in
the world are also things that our senses can detect (or learn to
detect) and act upon. Those sensorimotor (robotic) mechanisms that
allow us to learn, recognize and act upon the objects our symbols are
about can also ground those symbols, allowing further combinations of
the symbols to "inherit" that sensorimotor grounding.
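
Here is a toy sketch of that inheritance (purely illustrative; the
feature names and detectors are invented stand-ins for real learned
sensorimotor category mechanisms): a few base symbols are tied directly
to detectors operating on sensory input, and a new symbol defined
purely symbolically from them applies to real input because its
constituents are grounded.

    def looks_horse_shaped(image_features):
        # Stand-in for a learned sensorimotor category detector.
        return image_features.get("legs") == 4 and image_features.get("mane", False)

    def looks_striped(image_features):
        return image_features.get("stripe_count", 0) > 5

    # Grounded base symbols: each is tied to a detector, not to other symbols.
    grounded = {"horse": looks_horse_shaped, "striped": looks_striped}

    # A purely symbolic definition: "zebra" = "horse" AND "striped".
    # It inherits grounding because its constituents are grounded.
    def is_zebra(image_features):
        return grounded["horse"](image_features) and grounded["striped"](image_features)

    sample = {"legs": 4, "mane": True, "stripe_count": 12}
    print(is_zebra(sample))  # True: the new symbol applies to real input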

But those sensorimotor mechanisms are not just computational or
symbolic: They are dynamical, physical systems.

This new world of hybrid symbolic/nonsymbolic AI now includes neural
networks -- parallel, distributed systems -- that may be somewhat
brainlike, and are especially good at learning sensorimotor categories.
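
For example (a toy sketch of mine, with invented features, not a
description of any particular network): even a single perceptron can
learn to separate two sensorimotor categories from labelled feature
vectors.

    def train_perceptron(samples, epochs=20, lr=0.1):
        # samples: list of (feature_vector, label) pairs, label 0 or 1.
        n = len(samples[0][0])
        weights, bias = [0.0] * n, 0.0
        for _ in range(epochs):
            for features, label in samples:
                activation = sum(w * x for w, x in zip(weights, features)) + bias
                prediction = 1 if activation > 0 else 0
                error = label - prediction        # -1, 0 or +1
                weights = [w + lr * error * x for w, x in zip(weights, features)]
                bias += lr * error
        return weights, bias

    # Invented features: (stripe_count, leg_count); label 1 = "zebra-like".
    data = [((12, 4), 1), ((0, 4), 0), ((9, 4), 1), ((0, 2), 0)]
    print(train_perceptron(data))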

Hybrid systems also include work on the mechanisms of vision, movement,
and learning. And they have even moved into the area of evolution, with
"genetic algorithms" and "artificial life".

All of this is what has led to the new goal of designing systems to
eventually pass the robotic version of the Turing Test, rather than the
classical symbolic ("pen-pal") version. And this is how robotics has
made its way to center-stage: as a way to "ground" the symbols.

> 2. How close are we to the kinds of creations in Spielberg's movie --
> artificial beings that can learn, dream, love, and act like humans? When, if
> ever, do you believe we will see such creations?

You've mixed two radically different things in your question. Learning
and acting are behavioral -- things we can DO. And although the behaviors
are still not that fancy, and we still don't know whether they will
scale, we already have some artificial systems that can do more and
more of those things.

Dreaming and loving, in contrast, are not things we DO, but things we
FEEL. Here we are out of the domain of behavior and Turing Testing, and
right into the old (but real) philosophical problem of consciousness
(the "mind/body" problem). How can I know that ANY other system than
myself, whether natural or artificial, feels?

I go with Turing on this one. Keep working on the behavioral capacity.
Once it scales up to a total capacity indistinguishable from our own,
for a lifetime if necessary -- i.e. once we have a robot that can pass
the Total Turing Test (both sensorimotor and symbolic) (T3) -- then at
least we can say that I have no better (or worse) grounds for believing
(or doubting) that it has real intelligence than I have for anyone or
anything other than myself.

See:

Harnad, S. (1991) Other Bodies, Other Minds: A Machine Incarnation of
an Old Philosophical Problem. Minds and Machines 1: 43-54.
http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad91.otherminds.html

Harnad, S. (1992) The Turing Test Is Not A Trick: Turing
Indistinguishability Is A Scientific Criterion. SIGART Bulletin
3(4): 9-10.
http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad92.turing.html

Harnad, S. (2000) Minds, Machines, and Turing: The Indistinguishability
of Indistinguishables. Journal of Logic, Language, and Information
9(4): 425-445. (Special issue on "Alan Turing and Artificial
Intelligence".)
http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad00.turing.html

> 3. At a more basic level, to what extent do you believe it's fruitful to try
> to replicate humans -- as opposed to building tools to extend the
> capabilities of humans?

These are two sides of the same coin. Let's call the general "science"
of intelligence "Cognitive Science" to include these new developments
extending classical AI. Cognitive Science includes both the Forward
and Reverse Engineering of the Mind. Forward engineering is designing
systems that do useful things for people. Reverse engineering is
figuring out how people do those things, again by designing systems
that can do them. It may be possible to design "toy models" that do the
same thing in many ways, some in the humanlike way and some not. But I
doubt that there are as many ways to skin the "Total" cat -- the T3
robot. There, forward and reverse engineering probably converge (and
that's why we can trust the Turing Test).

For now, though, the forward and reverse engineering of the mind will
probably proceed separately, each at its own pace.

> 4. In the next five or ten years, what significant advances would you expect
> to see in applied artificial intelligence? This story is for a section in
> which timelines accompany stories, mapping out some general (or, in some
> cases, specific) predictions for milestones of the coming years. To the
> extent that you can rub your crystal ball and see certain developments on
> the horizon, that would be particularly helpful!

I have no specific date-linked predictions. Forward engineering's
mind-like toys will keep growing, but I doubt they will catch up with
sci-fi robots in the next decade. There will be stunning "tricks" that
enable computers and robots to do more and more things for us, but not
in a particularly mind-like way. (Chess programmes try out every
possible move, something the brain certainly can't do.) Meanwhile basic
research on the reverse-engineering of the mind (and hence the brain)
will continue, but I don't see any fundamental breakthroughs on the
horizon. We will be extending and testing the advances that have
already been made (in machine learning, vision, movement, speech
processing and production).
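
To illustrate the kind of "trick" I mean, here is a toy sketch (mine,
not any real chess engine) of the exhaustive game-tree search such
programmes rely on, applied to a miniature take-1-or-2-sticks game
rather than chess; whoever takes the last stick wins.

    def best_move(sticks, maximizing=True):
        # Exhaustive minimax: value is +1 if the maximizing player can
        # force a win from this position, -1 otherwise.
        if sticks == 0:
            # The previous player took the last stick and won.
            return (-1 if maximizing else 1), None
        best = None
        for take in (1, 2):
            if take > sticks:
                continue
            value, _ = best_move(sticks - take, not maximizing)
            better = (best is None
                      or (maximizing and value > best[0])
                      or (not maximizing and value < best[0]))
            if better:
                best = (value, take)
        return best

    print(best_move(7))   # -> (1, 1): from 7 sticks, taking 1 forces a win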

> Any other thoughts you believe are relevant to these themes are certainly
> welcome. Thanks a bunch for your consideration, and I look forward to
> hearing from you!

It took the Blind Watchmaker (Evolution) millions of years to engineer
the human mind. It's all there in the genetic code, but it is even more
unlikely that we can decipher how he (it) managed to do it just from
the symbols in the genetic code than it is that we can learn Chinese
from a Chinese-Chinese dictionary alone. There is no substitute for
trial and error as we design systems to scale up to the Turing Test.
There may eventually be some new theoretical insights that fast-forward
us closer to T3, but at the moment I think the scaling is still
proceeding at a snail's pace. Our imaginations are still
far outstripping our productions.

--------------------------------------------------------------------
Stevan Harnad
Professor of Cognitive Science
Department of Electronics and Computer Science
University of Southampton
Highfield, Southampton SO17 1BJ, UNITED KINGDOM
harnad@cogsci.soton.ac.uk / harnad@princeton.edu
phone: +44 23-80 592-582   fax: +44 23-80 592-865
http://www.cogsci.soton.ac.uk/~harnad/
http://www.princeton.edu/~harnad/


