> Goedel's theorem seems to me to prove that Mechanism is false, that is, that
> minds cannot be explained as machines.
> What Lucas is saying in this paper is that it is impossible - by Goedel's
> theorem - to produce a model of the mind using purely mechanical methods, i.e.
> a machine, that is intelligent, self-aware or conscious.
I find this statement very hard to come to terms with. It is very
apparent that the current state of technology cannot produce mind-like
machines, but it seems irrational to think that technology never will.
Conversely, the idea of machines thinking and communicating consciously
seems like wild science fiction.
> Goedel's theorem states that in any consistent system which is strong enough to
> produce simple arithmetic there are formulae which cannot be proved-in-the-
> system, but which we can see to be true.
This is Lucas's argument throughout the whole paper. Any formal proof
needs to be conducted within the constraints of the machine's formal
system. Anything external to the system cannot be proved within it.
Humans, using knowledge from outside the machine's formal system, can
easily see these external truths to be true.
> Essentially, we consider the formula which says, in effect, "This formula is
> unprovable-in-the-system". If this formula were provable-in-the-system, we
> should have a contradiction: for if it were provable-in-the-system, then it
> would not be unprovable-in-the-system, so that "This formula is unprovable-in-
> the-system" would be false: equally, if it were provable-in-the-system, then it
> would not be false, but would be true, since in any consistent system nothing
> false can be proved-in-the-system, but only truths. So the formula "This
> formula is unprovable-in-the-system" is not provable-in-the-system, but
> unprovable-in-the-system. Further, if the formula "This formula is unprovable-
> in-the-system" is unprovable-in-the-system, then it is true that that formula
> is unprovable-in-the-system, that is, "This formula is unprovable-in-the-
> system" is true.
I have had great difficulty in understanding this paragraph. We have a
formula which the system cannot prove. Then Lucas says if it were
provable we would have a contradiction. Well, that is obvious, since Lucas
is equating two opposites.
If this formula is provable, it equates to being not unprovable. "This
formula is unprovable" is a false statement given the formula is
provable. Also, given that the formula is provable, it would equate to
being true, since in a consistent system nothing false can be proved.
> Basically, this says that there is a formula which, if the system proved it
> true, would be false, and if it proved it false, would be true. I fully agree with this given the
> system is consistent and not intelligent.
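The quoted reasoning is easier to follow written out symbolically. A sketch (my own notation, not Lucas's), writing Prov(G) for "G is provable-in-the-system":

```latex
% The Goedel sentence G asserts its own unprovability:
G \;\leftrightarrow\; \neg\,\mathrm{Prov}(G)

% 1. Suppose Prov(G). In a consistent system only truths are provable,
%    so G is true; but G says "not Prov(G)" -- contradiction.
\mathrm{Prov}(G) \;\Rightarrow\; G \;\Rightarrow\; \neg\,\mathrm{Prov}(G)

% 2. Hence G is not provable-in-the-system:
\neg\,\mathrm{Prov}(G)

% 3. But that is exactly what G asserts, so G is true, yet
%    unprovable-in-the-system.
```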
> Goedel's theorem must apply to cybernetical machines, because it is of the
> essence of being a machine, that it should be a concrete instantiation of a
> formal system.
> Our idea of a machine is [...] that its behaviour is completely determined by
> the way it is made and the incoming "stimuli": there is no possibility of its
> acting on its own: given a certain form of construction and a certain input of
> information, then it must act in a certain specific way.
I totally agree with this. All current programming techniques capture
events and process them given unique strategies for each event. In my
view, there is no artificial intelligence in any such system. Only when
processing power becomes greater than the sum of its parts will artificial
intelligence truly exist.
To create artificial intelligence Lucas suggests giving the system
"alternative instructions" for the same event making the system "no longer
completely deterministic". This would be a great step forward allowing
the system to decide which method to execute for each event, which humans
would not be able to predetermine.
> ...consider what a machine might be able to do if it had a randomizing device
> that acted whenever there were two or more operations possible, none of which
> could lead to inconsistency.
Unfortunately this would not lead to any artificial intelligence, but only to guess work.
> A probability distribution would be a good method of choosing a possible operation.
This is the same "guess work" in disguise. Although using a probability
distribution would greatly increase the chances of the machine choosing
the "best" option, the machine wouldn't be thinking, only calculating the
most likely option, which relates back to the original formal system where
every event is predetermined, i.e. given certain stimuli, the machine
would always choose the same option each time the same stimuli were presented.
Lucas shows that "Machines are definite" and all proofs within the system
are based on simple rules of inference on axioms. These axioms only
relate to the system itself. Nothing can be speculated outside of the
system. We are trying to build a machine which has mind-like behaviour
by writing down every possibility (in theory, anyway, since writing down
each possibility is only feasible "given sufficient time"). How are humans
meant to write down the ways we operate when we haven't got a clue? We have
no idea how our minds work, and yet Lucas is trying to formalise them.
> Lucas's main argument [is] that a limited machine cannot be like mind. That
> it can't prove the Goedelian formula, but a human OUTSIDE the system can.
> Is there a Goedelian formula for the mind? One which a machine may be able to
> prove as it stands outside the system?
This makes sense. Humans build machines whose "world" is smaller than
ours. This is why we can see things to be true when the machine cannot -
we can step outside of the machine's system. However, in the same light,
we can see the truths within our "world", but cannot see beyond it. We
have no idea about the afterlife, the full extent of the universe, etc; we
only have speculations - we cannot prove them. This is the same with the
machine: the machine can only prove things inside its own system. If a
machine was large enough to encompass all of our "world", containing the
same (or greater) knowledge as us, then the Goedelian formula would not
apply. Or rather, it would still exist - there would still be things we
could not prove - but we would not know (or be able to prove) anything the
machine couldn't, resulting in the potential for the machine to be equal
to, or even greater than, our mind.
But, within our own system it seems humans are able to handle falseness.
Humans have the ability to say something known to be false, but a machine
cannot. However, just like a program in C can use a printf statement to
output an inconsistent response, so the human mind can verbally output
known inconsistencies. However, I believe that if they are known to be
false, we don't actually believe them. When we lie, we know we are lying.
Therefore if I said 1+1=2 and then 1+1=0 it looks as though I am being
inconsistent. Am I? I might say those words, but I know that 1+1=2.
> This shows that a machine cannot be a complete and adequate model of the mind.
> It cannot do everything that a mind can do, since however much it can do, there
> is always something which it cannot do, and a mind can.
> This is true because Lucas has said that his definition of a machine cannot
> have attributes that the mind has, and therefore he can say that a mind can
> always do things a machine cannot do.
One of the constraints of a machine is that it must be "definite". This
seems too strict. Lucas says anything which was indefinite or infinite we
should not count as a machine. I know that I cannot write down
formally how I reach conclusions which leads me to believe that I am
indefinite - I do not always follow the same path given seemingly
identical stimuli. The conclusions I reach change, given new
circumstances and knowledge and a lot is trial and error or randomness.
This is not to say that it is impossible that we may function using a
definite approach, but with our current thinking and knowledge I cannot
write such a formal method to describe myself.
> This is a contradiction. He [Lucas] states that we can reproduce any piece of
> mind-like behaviour. If this is so, why not produce those parts that enable a
> mind to solve any Goedelian formula (not every part of the mind may be needed
> for this)?
This is an interesting thought. However, the only way to escape the
Goedelian formula is to be outside of the system. Encompassing the new
knowledge within the formal system allows new proofs to be made. This
all comes back to the idea of encompassing our whole "world", so that the
knowledge of humans and machines is the same.
> However complicated a machine we construct, it will, if it is a machine,
> correspond to a formal system, which in turn will be liable to the Goedel
> procedure  for finding a formula unprovable-in-that-system.
I agree with this statement until the formal system is as large as our
own. Again, Lucas is ignoring this eventuality.
> We are trying to produce a model of the mind which is mechanical---which is
> essentially "dead"---but the mind, being in fact "alive", can always go one
> better than any formal, ossified, dead, system can. Thanks to Goedel's theorem,
> the mind always has the last word.
We can say we are alive and a machine is dead, but so what? When we
look at the results of a computer system, we do not look at the internal
operations, but at the conclusions and the thinking (justification) used
to reach them. What does it matter, then, whether the result came from a
dead machine or a living human being?
> The mechanical model must be, in some sense, finite and definite: and then the
> mind can always go one better.
> The human mind may be finite and definite, we just haven't found the limits yet.
I totally accept this point. We haven't discovered or formalised our
limits, but I think we would have to be outside-of-the-system before we
could prove them. Goedel's theorem is totally true, proved by our own
existence - we cannot prove things outside of our own system.
I would say that our minds are finite. We only have a certain number of
neurons in our brains and no more can be created. As for being definite,
we live in a definite universe with definite physical laws which we cannot
break. We obey these laws and therefore we are definite beings.
However, we are so far from understanding and appreciating how big the
universe is, and how complex it is, that to us it seems infinite.
> Goedel's theorem applies to deductive systems, and human beings are not
> confined to making only deductive inferences. Goedel's theorem applies only to
> consistent systems, and one may have doubts about how far it is permissible to
> assume that human beings are consistent.
I agree that some of our reasons and actions are inconsistent, however, to
say we are inconsistent, I believe, is inaccurate. We all make mistakes,
but rarely make the same mistakes twice. We all strive to be consistent.
> ...it has been urged by C.G. Hempel and Hartley Rogers that a fair model of the
> mind would have to allow for the possibility of making non-deductive
> inferences, and these might provide a way of escaping the Goedel result.
> As humans are not confined to making deductive inferences, what if the machine
> doesn't either? Lucas shows that this method will produce not inconsistent
> results, but wrong ones. Hence this method would not be an adequate model for
> the mind.
I disagree. As I have stated, we all make mistakes and we learn from
them. If the machine also corrected mistakes, learning from the
experience, the knowledge gained would be more accurate than before. The
continuation of this learning process could become an adequate model for the mind.
> In short, however a machine is designed, it must proceed either at random or
> according to definite rules. In so far as its procedure is random, we cannot
> outsmart it: but its performance is not going to be a convincing parody of
> intelligent behaviour: in so far as its procedure is in accordance with
> definite rules, the Goedel method can be used to produce a formula which the
> machine, according to those rules, cannot assert as true, although we, standing
> outside the system, can see it to be true.
> Again he makes the statement that a human standing outside the system can see
> the Goedelian formula to be true, but the machine cannot. Could it be that
> anything (human or machine) inside the system cannot see it to be true?
Definitely. That is Goedel's theorem. This brings us back to square one!
> Goedel showed in his second theorem---a corollary of his first---it is
> impossible to prove in a consistent system that that system is consistent.
> Therefore, Lucas states that the human and the machine are assumed to be
> consistent, because they decide to be, i.e. that any recognised inconsistencies
> will not be tolerated, and thus retracted.
This is proof of the fact that we cannot prove that we are consistent.
We can't prove it, because we cannot go outside of our system.
> There always remains the possibility of some inconsistency not yet detected.
> At best we can say that the machine is consistent, provided we are.
> [...] are not men inconsistent too? Certainly women are, and politicians; and
> even male non-politicians contradict themselves sometimes, and a single
> inconsistency is enough to make a system inconsistent.
Edwards seems to take this as a personal attack. I don't think it was
meant offensively, but to show that men never seem to understand women.
I have to say that I smiled.
> Human beings, although not perfectly consistent, are not so much inconsistent
> as fallible.
This makes much more sense. We have already said that there are some
things a computer can do better than humans. This is one of them: our
ability to get things wrong, to err by miscalculation. Machines do not
have this problem, but they are programmed by humans. The calculations
computers compute will never be wrong, but the program or the input could be.
> Our inconsistencies are mistakes rather than set policies.
> If we really were inconsistent machines, we should remain content with our
> inconsistencies, and we would happily affirm both halves of a contradiction.
This is very true, and humans rarely make such errors. Lucas points out
that if a person is prepared to contradict themselves without any qualm,
that person is said to have "lost his mind". It is this "self-correcting"
(thinking and questioning) attribute which currently sets human minds
above machine "minds".
> A fallible but self-correcting machine would still be subject to Goedel's theorem.
I agree, but only until the machine's world reaches the same complexity as our own.
> The Goedelian formula refers to itself. Lucas says that in order for a machine
> to evaluate this, it must be self-conscious. Why does this have to be? Can the
> machine not evaluate itself without being self-conscious? The machine could be
> able to know what it is doing (evaluate itself), without being self-conscious
> (knowing it is doing it).
Interesting point. The whole point about computer systems (especially
Expert Systems) is that the system can justify its own actions. I would
not say those systems have any consciousness.
> no inconsistency once detected will be tolerated. We are determined not to be
> inconsistent, and are resolved to root out inconsistency, should any appear.
Lucas is again pushing the point to try to prove we are consistent beings.
> Lucas says that a machine cannot evaluate the part that is doing the
> evaluating, and hence cannot consider its own performance, and so cannot
> answer the Goedelian formula.
> A machine can assume that the part that is doing the evaluating has a certain
> performance and can include that in the total evaluation. Therefore it can
> evaluate its own performance.
> From Turing's argument:
> So far, we have constructed only fairly simple and predictable artefacts. When
> we increase the complexity of our machines there may, perhaps, be surprises in
> store for us. He draws a parallel with a fission pile. Below a certain
> "critical" size, nothing much happens: but above the critical size, the sparks
> begin to fly. So too, perhaps, with brains and machines. [...] Turing is
> suggesting that it is only a matter of complexity, and that above a certain
> level of complexity a qualitative difference appears, so that "super-critical"
> machines will be quite unlike the simple ones hitherto envisaged.
> Turing suggests that when a brain gets to a certain complexity, it becomes
> greater than the sum of its parts, which I believe is possible.
I cannot disprove this, but it seems highly unlikely. Lots of objects
have been created by humans which are greater than the sum of their parts,
e.g. vehicles, boats and houses. However, they still obey all the laws;
they don't do anything we don't expect them to. In just the same way, I
think computer programs, however advanced they become, will never become
greater than the programming put into them. Consciousness and thought are
essential parts of the mind, but I believe they are given by God. Human
hands cannot create a complete human mind with an artificial
consciousness, which is essentially life. I believe that artificial
intelligence will increase greatly in years to come, but any "mind"
created will only be as complex as the programs written by humans, nothing more.
The problem I have with this paper is that I do not agree with Lucas's
use of Goedel's theorem to disprove the human ability to create a mind.
The whole idea seems too abstract and incomplete. It seems Lucas didn't
think about the eventuality of a computer system reaching the complexity
of the whole universe. When that target is reached, machines and humans
will have exactly the same knowledge, and anything unprovable-in-the-system
will be unprovable by both the machine and the human. However, I do
believe such "mind-making" is out of the reach of human capability, as it
amounts to humans trying to play God.
> It would begin to have a mind of its own when it was no longer entirely
> predictable and entirely docile, but was capable of doing things which we
> recognized as intelligent, and not just mistakes or random shots, but which we
> had not programmed into it. But then it would cease to be a machine, within the
> meaning of the act.
We cannot write a formal system for the mind, and therefore a good model
would have to be unpredictable and not entirely docile. Lucas disproves
this by saying that it would then no longer be a machine. I believe that
to get anywhere near the level of complexity of a human mind, the system
would have to be unpredictable and not entirely docile, and would then
become greater than the sum of its parts. If the machine is entirely
predictable, it isn't a very good model for the mind, as human minds are
totally unpredictable.
> What is at stake in the mechanist debate is not how minds are, or might be,
> brought into being, but how they operate. It is essential for the mechanist
> thesis that the mechanical model of the mind shall operate according to
> "mechanical principles", that is, that we can understand the operation of the
> whole in terms of the operations of its parts, and the operation of each part
> either shall be determined by its initial state and the construction of the
> machine, or shall be a random choice between a determinate number of
> determinate operations. If the mechanist produces a machine which is so
> complicated that this ceases to hold good of it, then it is no longer a machine
> for the purposes of our discussion, no matter how it was constructed. We should
> say, rather, that he had created a mind.
Lucas is suggesting that we could build a model of a mind, where we knew
every action and operation of it. If the machine started to infer things,
basically "think for itself", it would become greater than the sum of its
parts and become "a mind". I agree with the thinking in principle, but
believe it is out of reach to build such a complicated machine. Also I
think it is improper to think that the machine will be able to do ANYTHING
that hasn't been programmed into it.
> It seems that the mind is either very complicated, or it is greater than the
> sum of its parts.
I think it is safe to say our minds are one of the most complicated things
on the planet and that they are far greater than the sums of their parts.
> We should take care to stress that although what was created looked like a
> machine, it was not one really, because it was not just the total of its parts.
> One could not tell what it was going to do merely by knowing the way in which
> it was built up and the initial state of its parts: one could not even tell the
> limits of what it could do, for even when presented with a Goedel-type
> question, it got the answer right. In fact we should say briefly that any
> system which was not floored by the Goedel question was eo ipso not a Turing
> machine, i.e., not a machine within the meaning of the act.
> If a machine were created thus, then it would be a model of the mind. Objective
> achieved. Due to the limitations imposed by Lucas, this would not be a machine
> by his definition.
I don't think we can create anything which is greater than the sum of its parts.
> Consider the following argument:
> The universe and everything in it obeys a set of physical laws.
> A computer can simulate physical laws.
> A human brain (and hence the mind) obeys physical laws from above.
> A computer can model the human brain and hence the mind.
The universe and everything in it obeys laws - we can only define some of
them. There may be more.
A computer can simulate physical laws - only the ones we know. A human
brain obeys these laws. We may also be obeying laws we can't yet define,
which we find too complicated to understand and cannot formalise.
A computer can only model the parts of the human brain we understand
completely. No more.
> If the mind does not obey physical laws, then there may be other laws it may
> obey, which could be modelled.
I agree, but stress the word "could". These other laws, which we don't
yet know exist, are not understood, and we cannot formalise them for use
in a computer system.
This archive was generated by hypermail 2b30 : Tue Feb 13 2001 - 16:36:26 GMT