Re: Mind/Body Problem

From: Stevan Harnad (harnad@coglit.soton.ac.uk)
Date: Wed Mar 03 1999 - 17:33:44 GMT


On Wed, 3 Mar 1999, T.Jones wrote:

> 1. For 100% proof you need a contradiction, so if e=mc2, at
> the same time e cannot not equal mc2, otherwise the formula
> would not work, therefore isn't this a contradiction? -
> meaning that this must be beyond a shadow of a doubt?

Good question, but here's the problem: The contradiction would only
arise if e were to equal mc2 AND e were NOT to equal mc2 at the same
time.

So you can be certain, ABSOLUTELY certain, certain with the clearest of
Cartesian (Descartes) clarity, beyond any shadow of doubt that:

"It is false that both e=mc2 and not[e=mc2] are true."

But you can't be sure that "e=mc2 is true"!

Notice that you could have substituted anything (true or false) for
e=mc2, say, "XY=Z," and you could still be sure that, whatever you said,
XY=Z could not be both true and false. But it does not follow from that
that XY=Z is true!
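
(A small illustration of my own, not part of the original exchange: here
is the same point as a two-line truth-table check, sketched in Python,
with P standing in for "e=mc2", "XY=Z", or anything else you like.)

    # For every possible truth value of a proposition P, check the two claims.
    for P in (True, False):
        no_contradiction = not (P and not P)   # "not both P and not-P"
        print(P, no_contradiction)

    # Prints:
    #   True True
    #   False True
    # "not (P and not P)" comes out True on every row, so we can be certain
    # of it without any evidence; but P itself is True on one row and False
    # on the other, so logic alone does not guarantee that P is true.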

And this is the real difference between mathematical truths and
scientific ones. All the mathematical truths (like 2+2=4) are not just
true, but NECESSARILY true, on pain of contradiction: If they were
false, it would lead to a contradiction. But the scientific truths,
like e=mc2, are only true on the strength of the supporting evidence, in
other words, they are very probably true, but not necessarily true.

> 2. Is the mind-body problem an inability of other people to
> objectively observe the mental state of a certain
> individual?,

No, that's the Other-Minds Problem, but they are closely interconnected.
The Other-Minds Problem is the problem of not being able to know (for
sure) that anyone but I myself have a mind. The Mind/Body Problem is the
problem of understanding the relationship between mind and body, between
mental states and physical states. It's easy to say that mental states
simply ARE physical states; but once you've said that, it's not nearly so
easy to understand what that could possibly mean!

In what way is experiencing a pain, or a depression, or even a smell
"physical"? It's obvious that certain things are going on in my body
when I feel a pain, but what is the pain itself? If you say "it's what
happens when certain nerves jangle and certain chemicals are released,"
I can still wonder: Yes, but why does it FEEL like something when those
nerves jangle and those chemicals get released? And that's just the same
question over again. So it has not been answered.

To put it another way: Does anyone feel they understand how FEELINGS can
actually be physical things (and only physical things)?

> as the only way in which that individual can
> explain/prove his/her mental state to other people is by
> relating it to physical states? i.e anxiety can be proved
> by butterflies in the stomach, which due to changes in the
> blood flow, has a physical basis.

(No "proof" outside maths, but never mind.)

We can understand anxiety as being CAUSED by physical things: We all
know that taking certain drugs can cause us to feel in a certain way.
But to say only that physical states CAUSE mental states is a form of
"dualism," because it implies that they're not both the same kind of
thing. To solve the Mind/Body problem (without resorting to dualism,
which basically concedes that there are two kinds of things in the
world, physical and mental, and they're not the same kind of thing, though they may
interact), one would have to say what physical thing is the SAME as
anxiety, not just what causes anxiety, or correlates with or
accompanies it, but what physical thing IS anxiety (and how? and why?).

That's not easy to do in a way that is both convincing and makes sense.

> 3. Re-phrasing question 2, is it the inability to provide
> evidence which turns thinking/feeling/cognition from a
> subjective to an objective experience?

No, the evidence problem is the other-minds problem. It is related to
the mind/body problem but they are not the same. One is about what you
can or cannot KNOW about whether or not something is or has a mind, the
other is about what being or having a mind IS. One is about what you can
know, the other is about what there is.

(In philosophy, questions about what you can or cannot KNOW are called
"epistemic" questions and questions about what there IS (what EXISTS)
are called "ontic" questions.)

But it's not just whether or not others have minds that you can't know
for sure, can't know in the way you CAN know that you yourself have a
mind. Solving the Mind/Body Problem itself might (according to some
thinkers) require a capacity for knowing that we don't have.
Maybe it would be transparently obvious to a being with more
intelligence than we have exactly how anxiety is really the same as some
physical state; it just isn't obvious to us, in fact it is not
understandable to us at all.

> 4. To understand the brain, you said that we would probably
> have to model it. Yet we can't do this with the mind as it
> has no physical basis on which models can be constructed,
> so how are we going to be able to investigate it?.

You're certainly asking the right question!

The Other-Minds problem, when it comes to my worries about whether YOU
have a mind, is not a practical problem. I can't be
"Descartes-Doubt-free" or "Cartesianly Certain" that you have a mind,
but I can be at least as confident of it as I am of the fact that e=mc2.

But when it comes to worries about whether a MODEL we've built has a
mind, that's a completely different story: There, explaining the mind
does encounter a big problem that no other branch of science has.

> Based on
> this, is there any chance that artificial intelligence
> (implying an ability to think, feel for itself) could be
> developed?

We can still try. That's where the Turing Test comes in (see below).
Maybe if we can build a model that can DO everything we can do, it will
FEEL too (if for no other reason than for the very same reason that
biological evolution, when it built US to be able to do all the things
we can do -- so as to survive and reproduce -- also seems to have had to
build us so as to FEEL as we do).

> Also, how can we be sure that it has been
> developed, i.e. that it is thinking for itself, since
> there is no objective evidence to show that thinking is
> actually taking place? (other-minds)

We can never be sure. Not just for the same reason we can never be sure
about e=mc2 in the same way we can be sure about 1+1=2, but for an
extra reason, unique to the branch of science that attempts to explain
the mind, namely, the mind/body problem.

I will close with a relevant little piece I just wrote for BBC's
Tomorrow's World about Artificial Intelligence, Computers, and the
Turing Test.

Stevan Harnad

Date: Tue, 2 Mar 1999 15:32:12 +0000 (GMT)
From: Stevan Harnad <harnad@coglit.soton.ac.uk>
To: Elinor Hodgson-SCIENCE <elinor.hodgson@bbc.co.uk>
Subject: Re: BBC Tomorrow's World

Dear Elinor,

(1) "Intelligence" is the CAPACITY underlying what intelligent creatures
like people and animals are able to DO (get about in the world, perceive
patterns, learn, master special skills -- maths, chess, billiards -- and
language).

So far, this is uncontroversial, and doesn't require "defining"
intelligence, because we all know what creatures are, and the many smart
things they can do. So there is no problem with saying that
"intelligence," whatever it turns out to be, is the "machinery"
underlying all those things they can do -- the basis in their brains for
all those abilities.

(2) We do not yet know the basis for the capacity (or capacities) we are
calling intelligence. We do not know how creatures are able to do the
many intelligent things they can do. There are only two ways to try to
understand how: One is through brain science, studying the brain to
try to find out how it manages to do all those things; the other is
through Artificial Intelligence, to try to design machines that can
do such things.

Brain Science turns out to be very slow in answering questions about
the brain's capacities. Just peeking and poking at the brain (even in
today's era of "brain imaging") has not revealed to us how the brain
manages to do the many intelligent things it can do.

The second way, trying to design smart machines ( = Artificial
Intelligence) splits into two as well, based on the goals of the
research. The goal can be (AI-1) simply to get machines to do smart
things, because that is useful to us. The other goal would be (AI-2) to
get machines to do smart things in a "natural" way, because we want to
know how real creatures and their brains do it.

Both of these approaches could be called "artificial intelligence"
("AI") because both try to design smart machines, though you might want
to reserve the term AI for AI-1 only (designing smart machines because they
are useful in doing things for us), calling AI-2 something like
"cognitive modelling," because it is trying to model the way the
mind/brain does things, and not just trying to get machines to do
useful things for us.

This difference in motivation for the two kinds of AI will come up
again. Cognitive Modelling (AI-2) is obviously closer to the goals of
Brain Science, but its focus is on the details of generating the
intelligence, not the details about the brain.

(3) There are many different kinds of machines, but among them,
computers are special, because they have certain "universal" powers.
Mathematicians and logicians have known intuitively for centuries
what "computing" is when they themselves are doing the computing, but
it was only in this century, with the work of Turing, Goedel, von
Neumann and others, that they tried to describe and formalise
exactly what "computing" is. The result was the birth of the theory of
computation (Turing, Goedel, Church, Post) and the birth of the digital
computer itself (von Neumann, Turing), a mechanical device that could
do what mathematicians called "computing."

At first it looked as if all these mathematicians and logicians had
produced several different theories of what computing was, but then the
theories turned out, although they each looked different, to be all
variants of the very same thing. So mathematicians became confident
that they had "captured" what computing was, because their many
different attempts to make it explicit all ended up being the same.

Turing's theoretical "Turing Machine" -- a hypothetical device that had
a number of different internal "states," and that could read and write
on an input tape, with the internal state it went into being determined
by what it read on its tape and what state it was already in at the
time -- was completely mindless and mechanical, yet it could DO things
that only intelligent creatures had been able to do until then.
(Actually it was not the Turing Machine, which is merely theoretical,
but the actual physical machines built according to the theory,
digital computers, that could do these intelligent things.)
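
(A minimal sketch of my own, to make "completely mindless and mechanical"
concrete; the toy machine and its rule table are illustrative, not taken
from Turing's paper. In Python, the rules map (state, symbol read) to
(symbol to write, head move, next state).)

    rules = {
        ("scan", "1"): ("1", +1, "scan"),   # skip over the 1s
        ("scan", "_"): ("1",  0, "halt"),   # at the blank, write one more 1
    }

    def run(tape, state="scan", head=0):
        tape = list(tape)
        while state != "halt":
            symbol = tape[head] if head < len(tape) else "_"
            write, move, state = rules[(state, symbol)]
            if head < len(tape):
                tape[head] = write
            else:
                tape.append(write)
            head += move
        return "".join(tape)

    print(run("111_"))   # -> "1111": blind rule-following adds one, in unary

Nothing in that loop "knows" what the 1s mean; it only reads shapes and
follows the table, which is exactly the sense in which the device is
mindless yet does something we would otherwise have called intelligent.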

So the first case of Artificial Intelligence was computers themselves,
doing calculations and deductions that until then could only be done by
intelligent people.

(4) The criticism immediately arose that this was not really
"intelligence," precisely because it was mindless and mechanical,
whereas humans and animals have minds, and do the intelligent things
they do consciously and deliberately, rather than mindlessly and
mechanically. So if machines can do "intelligent" things, they must be
doing it the "wrong way," and hence the things they do are not
intelligent after all.

It was to counter this objection that Turing proposed his Turing Test.
[Now note: the description I am about to give of the Turing Test is my
own interpretation. I think it is the right one, but it is
controversial, and there are those who may disagree about what Turing
really meant, and what his Test really does and does not show.]

Turing introduced the Turing Test (henceforth "T2") as a party game, in
which a man and a woman leave the room and the players can only
communicate with them in writing; the goal of the game is to guess which
one is the woman and which the man. (They try to fool everyone; they're
out of sight so you can't tell just by looking at them.)

The core of Turing's "thought experiment" was the idea of playing this
game, over and over, but sometimes having one of the candidates being
neither a man nor a woman but a machine, unbeknownst to the players.
Turing's point was that if you sometimes thought the machine was a man,
sometimes a woman (just as you did with real men and women) but it
never occurred to you that it was NEITHER, then, at the end of the
game, when you were told it was a machine, it would be completely
arbitrary to say "well in that case, it was just a trick; it was not
really intelligent."

Turing was pointing out how arbitrary it is to deny that the T2-passing
machine is "really" intelligent BEFORE we have the faintest idea of
what intelligence is. In fact, the goal of AI and T2 is to discover
what intelligence is; so if every time we get a machine to do something
smart we say "well then that's not the real thing," then we are not only
making it logically impossible to study artificial intelligence ("if
it's artificial, it's not real"), but we are also applying to machines
a rule that is much more severe than the one we apply to natural
creatures -- completely arbitrarily (which means illogically): For the
only way we know that any other person is intelligent is that they ACT
that way, and we have no cause to suspect otherwise, because we have no
(relevant) way to tell them apart from other people who really are
intelligent.

I say no RELEVANT way because Turing's point is that the fact that the
T2-passer is a "machine" is in and of itself not relevant. What is a
machine? No one really knows. "Machine" just means "mechanism," and
"mechanism" just means a physical system that obeys the cause-effect
laws (mechanical laws) of the universe. Surely biological systems are
no less causal, hence mechanistic, hence "machines," than anything else
in the universe. (What we usually mean by "machine" is something that
happens to be man-made, but surely that is not in and of itself relevant
to anything concerning intelligence.)

So the point of the Turing Test is that "intelligence IS as
intelligence DOES," and to find out that someone who is otherwise
completely indistinguishable from us ("Turing Indistinguishable") in
everything they do happens to be a machine is to find out nothing
relevant to whether or not they are really intelligent. On the
contrary: because we know how machines work, to successfully design
such a machine is to find out something about the nature of
intelligence!

See:

http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad91.otherminds.html
http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad92.turing.html

(5) With the advent of computation, digital computers, and the Turing
Test (T2), the era of Artificial Intelligence began -- slowly at first,
but picking up the pace especially in the 70's after the publication of
Minsky & Papert's celebrated 1969 critique of "Perceptrons."

Perceptrons (invented by Frank Rosenblatt) were rivals to AI. They were
artificial too, but they tried to generate intelligence in a
"brain-like" way, with little units, like brain cells, interconnected,
as neurons are in the brain, getting activated and changing the
strength of their connections to one another on the basis of learning
"experience." Rosenblatt thought that Perceptrons, which could already
learn to recognise some patterns as people do, would eventually be the
key to the way the brain generates intelligence.

Minsky & Papert's book put an end to that, showing that Perceptrons, far
from being able to do all the intelligent things the brain could do,
could not even solve "exclusive-or" ("XOR") problems -- for example,
learning to pick mushrooms that are either red, or that have a long
stalk, but not mushrooms that are both red and have a long stalk.
People can do that, Perceptrons can't (and one can prove that they
can't), so Perceptrons are the wrong model for intelligence.
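
(A sketch of my own, not Minsky & Papert's proof: a brute-force search, in
Python, for a single Perceptron-style threshold unit -- one weight per
input plus a bias -- that computes a target function. A unit for OR is
found at once; for XOR the search comes up empty, because no straight line
separates the "either" cases from the "neither/both" cases.)

    from itertools import product

    def find_unit(target):
        grid = [w / 2 for w in range(-8, 9)]         # candidate weights and bias
        for w1, w2, b in product(grid, repeat=3):
            if all((w1*x1 + w2*x2 + b > 0) == target(x1, x2)
                   for x1, x2 in product((0, 1), repeat=2)):
                return (w1, w2, b)
        return None                                  # no single unit will do

    print(find_unit(lambda a, b: bool(a or b)))        # OR: a unit is found
    print(find_unit(lambda a, b: bool(a) != bool(b)))  # XOR: None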

What is the right model? Minsky and Papert supported what has since
come to be called "classical" or "symbolic" AI: AI is classical
computation, as Turing and the others had defined it, and that turns
out to consist of: symbol manipulation. An "algorithm" is a rule for
manipulating symbols that will generate a "correct" result. The
formulas we all learned in maths for factoring quadratic equations are
examples of algorithms. They can be applied mechanically, by human or
machine; there is no need to understand what any of the symbols mean,
because the rules for manipulating them are not based on the symbols'
meanings but simply on their "shapes," and their shapes are arbitrary.
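
(To make "mechanical symbol manipulation" concrete, here is the familiar
quadratic-root algorithm written out as a procedure, in Python; my own
illustration, with hypothetical names. The procedure shuffles the symbols
by rule and attaches no meaning to them.)

    import math

    def quadratic_roots(a, b, c):
        # Apply x = (-b +/- sqrt(b^2 - 4ac)) / 2a, step by mindless step.
        disc = b * b - 4 * a * c
        if disc < 0:
            return ()                    # no real roots
        root = math.sqrt(disc)
        return ((-b + root) / (2 * a), (-b - root) / (2 * a))

    print(quadratic_roots(1, -3, 2))     # x^2 - 3x + 2 = 0  ->  (2.0, 1.0)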

The symbol for zero -- "0" or, for that matter "zero" -- does not look
like "zero-ness," nor does the word "red" look red; we use them as an
arbitrary shared convention to mean zero-ness or redness. Symbol
manipulation rules are based only on these arbitrary shapes, not their
meaning. (It doesn't matter what symbol you use for zero, as long as
everyone agrees to use the same one every time.)

Using just the power of symbol systems and algorithms, AI was able to
generate an impressive amount of intelligent activity: chess playing,
reasoning, understanding texts, analysing scenes, solving problems. It
looked as if it was going to go the full distance, till it eventually
passed T2.

But it didn't. Although symbol systems could do a great deal, and
might even be the right mechanism for explaining certain parts of our
intelligence -- the parts involving reasoning and language -- there
were many things that intelligent creatures could do with which symbolic
AI did not seem to be making great progress. AI's "toy" systems did not
seem to be growing up into the real thing.

(6) Then in 1980 came the philosopher John Searle's famous "Chinese
Room Argument"

http://www.cogsci.soton.ac.uk/bbs/Archive/bbs.searle2.html

a simple argument with which he showed the limits of AI as convincingly
as Minsky & Papert had shown the limits of Perceptrons.

The argument is still very controversial (and again, my own
interpretation is not shared by all, or even most!): Searle challenged
AI (he called what he was challenging "Strong AI," to distinguish it
from "Weak AI," which was simply the use of computer models as tools to
help understand intelligence). According to strong AI, intelligence is
just a symbol system (computer programme), the brain is irrelevant,
only running the right symbol system matters, and the decisive test of
whether or not we have found the right symbol system, the one that
really has a mind, is whether it can pass the Turing Test (T2).

So Searle proposed to answer Turing, in a way. Turing had said that if
something passes T2, it is irrelevant to tell us that it happens to be a
computer, running a certain programme. It would be arbitrary to
conclude that it was not really intelligent just because of that, for
there is nothing whatsoever that we know about computers, machines,
programmes, etc. that implies that they could not really have a mind,
just like you and me.

So Searle provided a reason that was not irrelevant, and not arbitrary,
why computers could not have minds:

He said: "For those who are ready to conclude from a computer that passes
T2 that the computer really understands the messages it is receiving
and sending, like a pen-pal with a mind: Suppose the Turing Testing was
being done in Chinese rather than in English. Suppose a computer passes
the Turing Test, for a lifetime, indistinguishable to anyone and
everyone from a real (Chinese) pen-pal. Now let me, John Searle,
execute the very same programme the computer is executing, in its
place. I receive the incoming Chinese messages from the real Chinese
pen-pal to the T2-pen-pal, I follow all the symbol-manipulation rules
of the T2-passing programme, and generate the outgoing messages, which
all make sense to the real Chinese pen-pal on the other end. Yet I
understand no Chinese. The incoming and outgoing symbols make
absolutely no sense to me. Well then, if I would not be understanding
Chinese by executing the programme, then neither would the computer. So
much for T2 and the capacity of AI to generate real understanding ( =
real intelligence, = really having a mind)."

That argument, simple as it sounds, published in Behavioral and Brain
Sciences journal in 1980, shook the foundations of AI much the way
Minsky & Papert's book had shaken up Perceptrons ten years earlier.

(7) Searle's Chinese Room Argument (and classical AI's limited success)
opened the door for other alternatives, among them new, more advanced
kinds of Perceptrons called "neural nets," again networks of
neuron-like units, but many layers of them, so that they could solve
XOR and many more complicated problems. Neural Nets are not classical
AI, because they are not just symbol systems, but they are still
artificial, so they are still forms of artificial intelligence.
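
(Again a sketch of my own, with the weights set by hand rather than
learned, just to show why the extra layer matters: two hidden threshold
units compute OR and AND, and the output unit fires for "OR but not AND",
which is XOR -- the very function a single Perceptron unit cannot compute.)

    def step(x):
        return 1 if x > 0 else 0

    def xor_net(x1, x2):
        h_or  = step(1.0*x1 + 1.0*x2 - 0.5)      # hidden unit 1: OR
        h_and = step(1.0*x1 + 1.0*x2 - 1.5)      # hidden unit 2: AND
        return step(1.0*h_or - 2.0*h_and - 0.5)  # output: OR and not AND

    for a, b in ((0, 0), (0, 1), (1, 0), (1, 1)):
        print(a, b, xor_net(a, b))               # 0, 1, 1, 0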

Classical AI is especially suited for mathematical and reasoning capacities,
and to some extent for language (Searle cast a bit of a shadow on this).
Neural nets are better at learning, especially pattern learning.

(8) Other developments added further new tools to supplement classical
symbolic AI:

The "Symbol Grounding Problem"

http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad89.searle.html

suggested that T2, the pen-pal version of the Turing Test (only symbols
in and symbols out) may not be the right test to aim for if you want to
model the mind: the robotic version, T3, is more in the spirit of
Turing's criterion that the candidate should be indistinguishable from
us in ALL of our capacities (not just the symbolic ones). In fact,
perhaps the only way to pass T2 in the first place is to design a
system that can pass T3: Perhaps our symbolic capacities need to be
GROUNDED in our robotic ones. Perhaps to understand what a word
denotes, it is not enough for a system to have only symbol-manipulating
algorithms: it also needs the capacity for real-world interactions with
the things its symbols denote; for Searle's Chinese Room Argument
applies only to a symbol system that passes the pen-pal T2, not to a
hybrid system that passes the robotic T3.

So, in addition to neural nets for learning, a T3-passer would have to
have sensory and motor transducers, and probably further components other
than just those of a digital computer. The real brain is over 80%
sensorimotor. AI now includes cognitive robotics, and these days AI,
robotics and brain science are joined together in a shared research
programme called "cognitive science" that includes cognitive
psychology, linguistics and even parts of philosophy.

Stevan Harnad
--------------------------------------------------------------------
Stevan Harnad
Professor of Cognitive Science
Department of Electronics and Computer Science
University of Southampton
Highfield, Southampton SO17 1BJ, UNITED KINGDOM

email: harnad@cogsci.soton.ac.uk / harnad@princeton.edu
phone: +44 1703 592-582
fax: +44 1703 592-865
http://www.cogsci.soton.ac.uk/~harnad/
http://www.princeton.edu/~harnad/
ftp://ftp.princeton.edu/pub/harnad/


