Re: Lucas, J. (1961) Minds, Machines and Goedel

From: HARNAD, Stevan (harnad@coglit.ecs.soton.ac.uk)
Date: Mon Feb 21 2000 - 20:44:03 GMT


http://cogprints.soton.ac.uk/abs/phil/199807022

On Mon, 21 Feb 2000, Grady, James wrote:

> > LUCAS:
> > "This formula is unprovable-in-the-system" would be false:
> > equally, if it were provable-in-the-system, then it would
> > not be false, but would be true, since in any consistent
> > system nothing false can be proved-in-the-system, but only
> > truths.

This is an error on Lucas's part. It holds only for systems that
are strong enough to include arithmetic. The propositional calculus,
for example (the Boolean algebra of and, not, or, etc.), is both
consistent and complete; so is the first-order predicate calculus.
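(To see why the propositional case is unproblematic: whether a formula
is a tautology can be checked mechanically by exhausting its truth
table, so the check always terminates and no Goedel sentence can
arise. A minimal sketch in Python -- my own illustration, nothing in
Lucas's text:

    from itertools import product

    def is_tautology(formula, n_vars):
        # Brute-force truth-table check: a propositional formula is a
        # tautology iff it comes out true under every assignment.
        return all(formula(*vals)
                   for vals in product([False, True], repeat=n_vars))

    # (p -> q) or (q -> p) is a tautology:
    print(is_tautology(lambda p, q: (not p or q) or (not q or p), 2))  # True
    # p and not p is true under no assignment:
    print(is_tautology(lambda p: p and not p, 1))  # False

No terminating test of this kind survives once arithmetic is included;
that is the territory Goedel's theorem stakes out.)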

> LUCAS explains how Goedel claims that in any consistent
> system there are always going to be unprovable statements
> which we can see with our human minds to be true.
>
> LUCAS here pins his whole argument on the prophecy that
> man will never be able to 'Goedel'. What if this assumption
> proves to be false? Given math's incompleteness it must
> have been conceivable to him that one day the Goedel
> algorithm would be born.

Not sure what you mean by "to Goedel." If you mean generating a Goedel
sentence (which essentially says "I am not provable from these axioms")
then, if the system includes arithmetic, it will always be possible to
generate such a sentence, and it will always be true (that it is
unprovable), and our minds will always SEE that it is true (it's
obvious: it says it can't be proved, and it's just as true and obvious
that it can't be proved as it is that this sentence ends with the letter
X).
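(For concreteness, here is the textbook form of the construction --
standard notation, not Lucas's: by the diagonal lemma, any consistent,
axiomatizable system T that includes arithmetic yields a sentence G_T
with

    T \vdash G_T \leftrightarrow \neg\mathrm{Prov}_T(\ulcorner G_T \urcorner)

If T proved G_T, T would prove a falsehood; so, if T is consistent,
G_T is unprovable -- which is just what G_T says, so G_T is true. That
last step is the one our minds "SEE.")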

> For example, say to yourself: "I can't understand what I am
> saying." Seeing as a machine is unable to lie in this way
> it can't be an adequate model of the mind.

If lying means saying something false, a machine can certainly do that.
What else is lying, if not just that? (You'll find yourself back to
Turing...)

> However it seems
> to me that one machine could resolve such a statement on
> another. Would it be possible for 2 machines in parallel to
> Goedel? And could this be a simplified explanation of the
> mind's Goedel algorithm?

No. More machines won't help; only adding the Goedel sentence to the
axioms will help, but then it's a different system and you have to start
all over again.
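(In standard notation -- my gloss, not Lucas's -- the repair iterates
as

    T_0 = T, \qquad T_{n+1} = T_n \cup \{G_{T_n}\}

but, assuming each stage stays consistent, every T_{n+1} is again a
consistent, axiomatizable system containing arithmetic, so it has a
Goedel sentence of its own; the regress never closes.)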

> As evolutionary
> creatures we have imperfections and are designed to make
> mistakes; how could you compensate for this in the
> calculation?

It is as easy to generate errors as to generate false statements (so
that does not distinguish minds from machines either).

> What if you allowed a machine to perform a computation
> which would lead to inconsistency if it were flagged up as
> an untruth? This machine would then have 'lied' but still
> be consistent, aware of/compensating for its fallacy.

The Goedel argument is not about inconsistency or lying. It is only
about the mind's capability of doing something a machine could not
do -- and that something, it is claimed, is knowing the truth of
unprovable Goedel sentences.

Is that a valid argument?

> How relevant is the incompleteness? In the same way that a
> female mind could never be a complete model of a male mind,
> does an artificial mind have to be a complete model of a
> human mind?

Well, we're trying to bridge the gap between a machine's being able
to do SOME things a mind can do (e.g., add and subtract) and its
being able to do ALL of them. For it is unsurprising that a machine
can do some of the things a mind can do. (One of the things a mind
can do is just SIT there, dumbly, and even a rock can do that.) What
would be surprising would be something that a mind could definitely
do that a machine could not.

Don't conflate Goedel-incompleteness of arithmetic with
computer-incompleteness when it comes to doing mind-like things. There
is a connection, but it is not incompleteness in quite the same sense.

> What if we were able to simulate a human mind on a computer?
> Just because it seems to be a human mind doesn't necessarily
> mean it is. <Turing>

Well, Turing would say it was, if we could not tell the difference...

> Suppose we created
> a replica of a bird egg so identical to the original that
> it was impossible to tell them apart.... Despite
> initial confusion, the destinies of the two alternatives
> would come to pass, leading one to disposal, the other to
> the sky.

Well then it WOULD be possible to tell them apart...

> If in fact we could mend the Goedel in a system 'recursively',
> would the mind always be sufficiently intelligent to grasp
> each new Goedel?

I can't see why not. The principle never changes, no matter how many
times you add the Goedel sentence to your axioms and start again...

> The mind, it seems, will always have the last word, as the
> machine is always limited by what is definite. Any definite
> machine is vulnerable to being out-Goedeled. However he goes
> on to say that one difference is enough to show that they
> are not the same.

I don't know what "definite" is, but certainly both computers and
minds/brains are finite, so that's no difference.

> Notable other differences might be... miscalculation, guessing
> (a uniquely human version of randomly choosing) and
> imperfection.

You think machines can't miscalculate? Or make wrong predictions?

> Human minds are clearly not entirely
> consistent systems, so...

True, but irrelevant. Don't conflate the consistency constraint in proof
(anything can be proved in an inconsistent system -- so can the opposite
of anything, if you catch my drift) with human inconsistency.
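(That "anything goes" point is the classical rule of explosion; the
standard derivation from two contradictory premises, in textbook
notation rather than anything from Lucas, is:

    \begin{align*}
    &1.\ P        && \text{premise} \\
    &2.\ \neg P   && \text{premise} \\
    &3.\ P \lor Q && \text{from 1, by $\lor$-introduction} \\
    &4.\ Q        && \text{from 2 and 3, by disjunctive syllogism}
    \end{align*}

Since Q was arbitrary, one contradiction proves every sentence --
which is why consistency matters for proof, whatever our everyday
inconsistencies amount to.)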

> It seems it would be far more productive to try to recreate
> the essence of humanity outside the bounds of a formal system.
> Our mind is not restricted by formal methods so breaking out
> is important. However any such arbitrary machine capable of
> shameless contradictions and inconsistencies such as proposed
> here would not be a good model for the mind.

I couldn't follow that. We are talking about formal systems because we
are talking about computation, which is formal (and about the limits
of computation, and formality). But inconsistency is no solution,
because with inconsistency, anything goes...

> > LUCAS:
> > To be able to say categorically that the Goedelian formula
> > is unprovable-in- the-system, and therefore true, we must
> > not only be dealing with a consistent system, but be able to
> > say that it is consistent. And, as Goedel showed in his second
> > theorem---a corollary of his first---it is impossible to prove
> > in a consistent system that that system is consistent.
>
> Any (in)consistency judgments we make are always going to be
> vulnerable as our own consistency (both our math and ourselves)
> is impossible to prove.

It is mostly Lucas's fault, but this vague talk about human
inconsistencies is not really relevant to the question of whether
the Goedel-limits of computation are evidence that the mind is not
just computational.
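(For reference, the second theorem Lucas invokes says, in its standard
form -- my notation:

    \text{if } T \text{ is consistent and includes arithmetic, then }
    T \nvdash \mathrm{Con}(T)

where Con(T) is the arithmetized statement that no contradiction is
provable in T. A system cannot certify its own consistency from
inside; and, as you note, neither can we formally certify ours.)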

> Lucas proposes our discrimination suggests we are not so much
> inconsistent as fallible. However this still binds us to
> compulsion of choice, which fails somehow to account for our
> apparent freedom to make both good and bad decisions.

Apparent freedom. (But that gets into the question of deterministic
and nondeterministic computation, and causality.)

> > LUCAS:
> > A person, or a machine, which did this without being able to
> > give a good reason for so doing, would be accounted
> > arbitrary and irrational.

So?

> > LUCAS:
> > If the mechanist produces a machine which is so complicated
> > that this ceases to hold good of it, then it is no longer a
> > machine for the purposes of our discussion, no matter how it
> > was constructed. We should say, rather, that he had created
> > a mind, in the same sort of sense as we procreate people at
> > present.
>
> Lucas does seem to jump the gun here. OK, we have some kind of
> super-machine, but LUCAS said earlier that it could be an
> adequate simulation of a mind only if it could do everything a
> mind can do. LUCAS has no real idea of what this super-machine
> could or couldn't do, so it seems a little premature to suggest
> it could be some kind of procreated mind.

You're right. Lucas goes completely vague here (maybe even
inconsistent!).

> It seems quite exciting that we might be able to marry up the
> hows and whys of humanity. There is obviously some truth here.
> The idea of critical complexity may well hold water; however,
> it too seems a little abstract.

Lucas is good if he sets the thinking process going...

Stevan


