Re: The Other-Minds Problem

From: HARNAD Stevan (harnad@cogsci.soton.ac.uk)
Date: Tue Jun 04 1996 - 21:28:44 BST


> From: "Tchighianoff, Caroline" <cgt195@soton.ac.uk>
> Date: Fri, 24 May 1996 14:10:58 GMT
>
> The basic idea of the other-minds problem is that even if we know we
> have a mind, we cannot guarantee that everybody else also has a mind.

We can know that we have feelings; we can't be sure who/what else does.

> We can only assume they do by observing their behaviour. For instance,
> when somebody hurts us, we experience pain and may show this by
> behaviour such as crying. But if we hurt somebody else and they cry, we
> cannot actually guarantee that they feel pain as we do. They may just
> be acting out the appropriate way to behave when hurt, which they have
> learnt from observing others' responses to situations that may be seen
> as harmful.

That's true, but it's only a philosopher's problem when it comes to
people -- or even animals that are like us. But when it comes to robots
and artificial intelligence -- when it comes to the attempt to
reverse-engineer the mind -- the other-minds problem becomes a practical
methodological problem for cognitive theorists.

> However, if a scientist can identify the brain pattern that occurs
> when a person says they are experiencing pain, and can show that this
> pattern is consistent across all individuals, then this may be taken
> as evidence to suggest that we are all in fact experiencing pain in
> the same way.

Yes, but as I said, that isn't our problem; our problem is understanding
HOW the brain -- or anything else that can do what the brain can do --
works. Can brain images tell us? Can they help us reverse-engineer the
mind?

> The Turing Test helps to illustrate the other-minds problem by showing
> how a person can communicate with another person in one room, and with
> a computer in another, through teletype, without being able to
> identify any differences between the human and the computer. From this
> it may be suggested that the computer is intelligent and therefore has
> a mind.

What about Searle's argument against this conclusion?

> However, for the computer to have a mind it must be able to behave in
> the same way as any individual in real-life situations, as the Total
> Turing Test illustrates. Therefore the computer would need to be a
> robot. If the robot can behave like a human in different situations,
> then we can assume that it too has a mind. As we have no evidence that
> other individuals have a mind, we cannot conclude, through lack of
> evidence, that a robot does not have a mind.

We have no better or worse grounds for believing that a person has a
mind than we do for believing a robot has a mind, if we cannot tell them
apart in any way except that one is man-made and different on the
inside.

> However, we must take into consideration that a robot is created by a
> human, and so in this sense it does have a mind, but is just a very
> complex creation.

I'm afraid this is not clear at all: Why does the fact that the robot
is man-made imply that it does have a mind?

More careful reading of the "Other Bodies/Other Minds" paper
will be needed to answer this question well.
http://cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad91.otherminds.html
