On Tue, 29 Feb 2000, Brooking, Stephen wrote:
> I would agree with Chalmers' criticism of Searle, although there is some
> validity in arguing that whether a given system can be seen to implement a
> given computation is a matter of interpretation.
We'll soon be getting to Searle's Chinese Room Argument about whether a
computer programme passing the Turing Test (TT) would have a mind. But
Searle's other suggestion -- that it's just a matter of interpretation
whether or not a computer is really running a particular programme -- is
somewhat harder to believe and support.
What do others think? It may be a matter of interpretation what a code
MEANS, but is it also just a matter of interpretation whether it's THAT
code or some other code that's running?
> I would argue strongly
> against behaviour being sufficient for possession of mind. Can
> behaviour not be seen in plants?
Yes, but the argument was not that ANY behaviour is enough; for Turing,
for example, it had to be Turing-Test-passing behaviour.
> > CHALMERS:
> > the central property of computation on which I will focus is...
> > that a computation provides an
> > abstract specification of the causal organization of a system. Causal
> > organization is the nexus between computation and cognition. If cognitive
> > systems have their mental properties in virtue of their causal
> > organization, and if that causal organization can be specified
> > computationally, then the thesis of computational sufficiency is
> > established. Similarly, if it is the causal organization of a system that
> > is primarily relevant in the explanation of behavior, then the thesis of
> > computational explanation will be established.
This is indeed Chalmers's main thesis: that what gives any system the
properties it has is its causal organisation, and that any causal
organisation can be "specified computationally." So whatever physical
system (e.g., the brain) has a mind ( = cognition = intelligence), it has
it because of its causal organisation. And since any causal organisation
can be "specified computationally," this one must be too.
But what does this mean? Does it or does it not mean that a computer
running the right programme (the one that "specifies computationally"
the right causal structure) will have a mind?
Does a system that specifies a causal structure computationally have
that causal structure? Part of the causal structure of an airplane, for
example, is that it is able to lift off the ground and fly. Does a
flight simulator that simulates the plane's causal structure have the
causal power to fly?
If so, then the same thing should work for the mind. But if not; if the
simulation "specifies" the causal structure but doesn't actually "have"
it, then that's another story.
What do you think (and why)?
> It is believable that cognitive systems have their mental properties in
> virtue of their causal organization. But can the causal organization be
> specified computationally? The claim is that a computation provides an
> abstract specification of the causal organization of a system. If it
> is abstract, then surely some aspects of the causal organization are
> not going to be specified by the computation.
Very good observation! Perhaps not every causal detail is relevant (some
planes could be black and some white) but surely the fact that it can
soar in the (real) air is relevant. (For if the air's causal structure
is also abstracted, then we're left a bit in the air about what's really
going on here!)
What is the relation between the kind of abstraction that is involved in
simulating causal structure computationally, and the notion of
"implementation-independence" (the hardware/software distinction)?
> > CHALMERS:
> > Call a property P an organizational invariant if it is invariant with
> > respect to causal topology: that is, if any change to the system that
> > preserves the causal topology preserves P. The sort of changes in question
> > include: (a) moving the system in space; (b) stretching, distorting,
> > expanding and contracting the system; (c) replacing sufficiently small
> > parts of the system with parts that perform the same local function (e.g.
> > replacing a neuron with a silicon chip with the same I/O properties); (d)
> > replacing the causal links between parts of a system with other links that
> > preserve the same pattern of dependencies (e.g., we might replace a
> > mechanical link in a telephone exchange with an electrical link); and (e)
> > any other changes that do not alter the pattern of causal interaction
> > among parts of the system.
> Can we be sure that any such changes are valid? A causal topology has
> been described in the paper as representing "the abstract causal
> organization of the system". In other words, it is "the pattern of
> interaction among parts of the system". It "can be thought of as a
> dynamic topology analogous to the static topology of a graph or
> network". What if the interaction among parts of the system is time
> dependent? By stretching, distorting, expanding or contracting the
> system, this time dependence will probably be disturbed.
Right. Or space-dependent: What is the "invariant causal topology" of an
airplane, flying? Can a computer running the right programme have that
causal topology too?
[Another way to put it: is simulating causal structure the same as
having that causal structure?]
> > CHALMERS:
> > Most properties are not organizational invariants. The property of flying
> > is not, for instance: we can move an airplane to the ground while
> > preserving its causal topology, and it will no longer be flying. Digestion
> > is not: if we gradually replace the parts involved in digestion with
> > pieces of metal, while preserving causal patterns, after a while it will
> > no longer be an instance of digestion: no food groups will be broken down,
> > no energy will be extracted, and so on. The property of being tube of
> > toothpaste is not an organizational invariant: if we deform the tube into
> > a sphere, or replace the toothpaste by peanut butter while preserving
> > causal topology, we no longer have a tube of toothpaste.
> Could a similar argument not be put forward against mentality being an
> organizational invariant? If we gradually replace the neurons in a
> brain with silicon chips, while preserving causal patterns, will it
> still perform as before? It would almost definitely be a very powerful
> "computer", but would it still possess mentality?
You almost asked exactly the right question -- but then you took a left
turn into performance! If a computer cannot have all the relevant causal
structure of a plane (hence cannot be a plane), might the mind not be
like a plane? A big difference is that we can SEE (unless we are tricked
by a virtual reality simulation) that a computer simulation of a plane
cannot really fly us to Chicago, but we cannot SEE whether an AI simulation
of a mind (cognition, intelligence) really doesn't have a mind
So is that all there is to it? You can SEE a computer's not really
flying, but you can't SEE a computer's not really thinking?
But Steve Brooking asked here about what the computer can DO, and if it
can really pass the TT, then it can do everything we can do. So
the analogy with the plane fails, because there IS something a plane can
DO that the simulation can't do: It can fly!
So is the TT a fair test, then (out of sight, out of mind)? If
it can DO everything a mind can do, then has it captured all the relevant
causal structure?
> > CHALMERS:
> > In general, most properties depend essentially on certain features that
> > are not features of causal topology. Flying depends on height, digestion
> > depends on a particular physiochemical makeup, tubes of toothpaste depend
> > on shape and physiochemical makeup, and so on. Change the features in
> > question enough and the property in question will change, even though
> > causal topology might be preserved throughout.
> What does mentality depend on? Does mentality not depend on a
> particular physiochemical make up, as with digestion?
Again, you asked almost exactly the right question. Last time you veered
off in the last second into behaviour; this time you veered off into
physiology. But the trouble with physiology is that we have no way of
knowing which physiological properties are RELEVANT to having a mind,
and which are irrelevant (like the blackness vs. whiteness of the
airplane).
But there really is one (and only one) property that we know is relevant
to having a mind, and when it's there, there's a mind, and when it's
not, then there is not (no matter what the behaviour or the physiology).
Turing mentions this in passing, but only to dismiss it as leading to
"solipsism" and doubts about one another's minds, and the end of the
possibility of saying anything sensible at all on the subject.
What is that property that is neither behavioural nor physiological?
For it will be the mental counterpart of flying (which Chalmers has
conceded is not a computational property).
> surely the electrical
> impulses (in the brain) are heavily reliant on timing. The first
> neuron or group of neurons to react to a certain stimulus will produce
> impulses that spread to further neurons, which in turn will react. It
> is probable, I believe, that the time it takes for impulses to travel
> along different connections to different neurons will affect the order
> in which neurons fire, and hence the reaction of the system.
All true. And it is logically possible that online timing is a critical
property, and cannot be captured by just virtual timing. But there is no
way to know whether it is indeed a critical property for having mental
states, or just as irrelevant as the colour of an airplane.
But it's certainly a possibility; and so it is worth noting now that if
that possibility is in fact really the case, if temporal or spatial
properties are ESSENTIAL to having mental states (cognition,
intelligence), then mental states are not just computational states (and
computationalism is false). For then intelligent systems will
have to be hybrid, computational/noncomputational; perhaps even certain
performance capacities (perhaps even passing the TT!) can only be
generated by a hybrid system.
> > CHALMERS:
> > An exception has to be made for properties that are partly supervenient on
> > states of the environment.
I have to explain this weasel-word, "supervenient." It's a kind of a
cheat. Remember we had considered that thinking might possibly be like
flying (i.e., noncomputational), except you couldn't SEE that, the way
you could see with flying. You can't SEE whether someone is really thinking,
really has mental states. What you CAN see is (1) his hardware, (2) his
software and (3) his behaviour. So if a system passes the TT, and really
does happen to have a mind, then that mind is said to be "supervening"
(we may as well call it "piggy-backing") on whatever were the
properties that made the system pass the TT. For example, if the system
was a purely computational one, then the mental states would "supervene"
on the (implemented) computational states. (If the system was hybrid,
then it would supervene on whatever combination of hardware and software
properties were the relevant ones for passing the TT.)
If you don't find that the word "supervene" adds anything here
conceptually, then you see it the same way I do.
> > CHALMERS:
> > Such properties include knowledge (if we move a
> > system that knows that P into an environment where P is not true, then it
> > will no longer know that P), and belief, on some construals where the
> > content of a belief depends on environmental context. However, mental
> > properties that depend only on internal (brain) state will be
> > organizational invariants. This is not to say that causal topology is
> > irrelevant to knowledge and belief. It will still capture the internal
> > contribution to those properties - that is, causal topology will
> > contribute as much as the brain contributes. It is just that the
> > environment will also play a role.
> If a system that knows that P is moved into an environment where P is
> not true, does the preceding claim that the system will just forget P?
> Surely a system that truly possesses mentality will know that it knew
> P, but will also know that P is no longer true?
That's a fair question. This is a philosophical point which those of
you who are not interested can safely ignore, because it has nothing to
do with AI; but here is what it is about:
There is a difference between my BELIEVING that it is raining, and my
KNOWING that it is raining. When I believe it is raining, and it really
is raining, then I know it is raining; otherwise I just believe it's
raining (I think I know, but I don't really know, because it's not true,
except I don't know it's not true).
So if I was in the mental state of "knowing" that it was raining
(because I believed it was raining, and it really was raining), and it
stopped raining (but I hadn't noticed that it had stopped raining),
then my mental state would no longer be that of knowing it was raining.
So although nothing inside my head had actually changed, my mental
state had changed from knowing to believing, when the rain changed from
falling to not falling.
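The point can be made concrete with a toy sketch (my own illustration, in Python; the names are invented): knowledge is true belief, so the agent's internal state can stay fixed while whether it KNOWS changes with the world.

```python
# Toy illustration (invented names): knowledge = true belief.
# The agent's internal state never changes, yet whether it KNOWS
# depends on what is going on outside its head.

class Agent:
    def __init__(self):
        self.believes_raining = True  # fixed internal ("head") state

def knows_raining(agent, world_raining):
    # Knowing requires both the belief and the world cooperating.
    return agent.believes_raining and world_raining

a = Agent()
print(knows_raining(a, True))   # True: believed and raining -> knowing
print(knows_raining(a, False))  # False: same head, mere belief now
```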
Don't worry about that difference, because it isn't really a mental
difference, but a difference in the relation between what is going on in
your head and what is going on outside of your head that you are in.
> > CHALMERS:
> > Assume conscious experience is not organizationally invariant. Then
> > there exist systems with the same causal topology but different
> > conscious experiences. Let us say this is because the systems are made
> > of different materials, such as neurons and silicon [...] Consider
> > these [two] systems, N and S, which are identical except in that
> > some circuit in one is neural and in the other is silicon.
> > The key step in the thought-experiment is to take the relevant neural
> > circuit in N, and to install alongside it a causally isomorphic silicon
> > back-up circuit, with a switch between the two circuits. What happens when
> > we flip the switch? By hypothesis, the system's conscious experiences will
> > change [...]
> > But given the assumptions, there is no way for the system to notice these
> > changes. Its causal topology stays constant, so that all of its functional
> > states and behavioral dispositions stay fixed. [...] We might even
> > flip the switch a number of times, so that [...] experiences "dance"
> > before the system's inner eye; it will never notice. This, I take
> > it, is a reductio ad absurdum of the original hypothesis: if one's
> > experiences change, one can potentially notice in a way that makes some
> > causal difference. Therefore the original assumption is false, and
> > phenomenal properties are organizational invariants.
> If all that has been said until now can be taken as truth, then this
> is a perfectly reasoned argument, and it is perfectly reasonable to
> expect a causal difference to be seen when experiences change. However
> this whole argument relies on the fact that the two circuits are
> functionally identical, and I haven't accepted that this will be the
> case after the changes (replacement of neurons with silicon) have been
> made.
Chalmers's point relies on the following four things as all really
being the same thing: computationally identical = functionally
identical = causally identical = behaviourally identical. If that were
correct, then mental states would have to be computational states,
because there could not be mental differences without computational
differences. (If something green suddenly started to look red to me, I
could SAY so, and that would be a functional/behavioural/computational
difference.)
But you are right not to accept that computational = causal. Remember
the plane; it has causal properties that are not computational. The same
could be true of mental states; they could depend on (say) hardware
differences that were not relevant to the computation; or on peripheral
devices that were not even computational; or on parallel/distributed
nets whose parallelism can only be simulated serially by a computer. And
any of those differences could be the ones that mental states actually
depend on.
Even the power to pass the Turing Test could (like the power to fly)
depend on the noncomputational properties of a hybrid system.
> > CHALMERS:
> > If all this works, it establishes that most mental properties are
> > organizational invariants: any two systems that share their fine-grained
> > causal topology will share their mental properties, modulo the
> > contribution of the environment.
> Having not accepted the argument put forward above, I have to argue
> that most mental properties are not organizational invariants, and
> further that any two systems that share causal topology will not share
> their mental properties.
I think you are right, but only if what you mean is that there are
causal properties (e.g., flying) that are not computational -- and
thinking could be such a property too.
> > CHALMERS:
> > To establish the thesis of computational sufficiency, all we need to do
> > now is establish that organizational invariants are fixed by some
> > computational structure. This is quite straightforward.
> > An organizationally invariant property depends only on some pattern of
> > causal interaction between parts of the system. Given such a pattern, we
> > can straightforwardly abstract it into a CSA description: the parts of the
> > system will correspond to elements of the CSA state-vector, and the
> > patterns of interaction will be expressed in the state-transition rules.
> > [...] Any system that implements this CSA will share the causal
> > topology of the original system. [...]
> > If what has gone before is correct, this establishes the thesis of
> > computational sufficiency, and therefore the view that Searle has
> > called "strong artificial intelligence": that there exists some
> > computation such that any implementation of the computation possesses
> > mentality. The fine-grained causal topology of a brain can be specified as
> > a CSA. Any implementation of that CSA will share that causal topology, and
> > therefore will share organizationally invariant mental properties that
> > arise from the brain.
> This argument relies on the fact that mentality is an organizational
> invariant, and also relies on implementation independence. I have not
> accepted either of these facts, and have argued against them. Both of
> these are linked to my belief that there is a lot more to mentality
> than the discrete functionality of a system that possesses it. A
> system which possesses mentality cannot be expressed as a set of
> discrete states and transitions between them. I believe that there is
> a time dependence that cannot be captured by this representation.
Although the critical noncomputational property on which mentality
depends may not happen to be timing, you are quite right that timing is
a noncomputational property, and it and many others could be the ones
that mentality supervenes on, instead of supervening on computation
alone.
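For anyone who wants Chalmers's CSA abstraction in concrete form, here is a minimal toy sketch (my own invention, not Chalmers's full formalism): the system's parts become elements of a state-vector, and the pattern of causal interaction among the parts becomes state-transition rules.

```python
# Toy sketch (my invention, not Chalmers's full formalism): a tiny
# combinatorial-state automaton. Each vector element stands for a "part"
# of the system; the transition function encodes the pattern of causal
# interaction among the parts.

def step(state):
    """One transition: each element's next value depends on
    specific other elements -- an arbitrary toy dependency pattern."""
    a, b, c = state
    return (b, c, (a + b) % 2)

state = (1, 0, 1)
for _ in range(3):
    state = step(state)
print(state)  # (1, 1, 0)
```

Note that the sketch only expresses the *pattern* of dependencies; whether a system running it thereby *has* the original system's causal powers is exactly the question at issue.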
> > CHALMERS:
> > A computational basis for cognition can be challenged in two ways. The
> > first sort of challenge argues that computation cannot do what cognition
> > does: that a computational simulation might not even reproduce human
> > behavioral capacities, for instance, perhaps because the causal structure
> > in human cognition goes beyond what a computational description can
> > provide. The second concedes that computation might capture the
> > capacities, but argues that more is required for true mentality.
> I have said that I don't believe a system in possession of mentality
> can be captured by a discrete specification, due to time dependence.
> Time dependence can be captured in a discrete system, to an ever
> increasing level of accuracy, so my argument may come down to whether
> we will ever be able to describe a brain in such a way that the way in
> which all of the neurons react is known. I will argue for the first
> sort of challenge given above, as I believe however accurate a
> discrete system can get, it will never be accurate enough.
That is a logical possibility (and it is good enough to refute
Chalmers's claim that it couldn't be so), but of course you have not
actually given reasons to believe that timing is in reality critical for
mentality -- or that virtual timing couldn't accomplish the same thing.
(You also seem to be conflating questions of timing -- which can be
perfectly discrete -- with questions of continuity, which are not
peculiar to time.)
> > CHALMERS:
> > The question about whether a computational model simulates or replicates a
> > given property comes down to the question of whether or not the property
> > is an organizational invariant. The property of being a hurricane is
> > obviously not an organizational invariant, for instance, as it is
> > essential to the very notion of hurricanehood that wind and air be
> > involved. The same goes for properties such as digestion and temperature,
> > for which specific physical elements play a defining role. There is no
> > such obvious objection to the organizational invariance of cognition, so
> > the cases are disanalogous, and indeed, I have argued above that for
> > mental properties, organizational invariance actually holds. It follows
> > that a model that is computationally equivalent to a mind will itself be a
> > mind.
> There is no obvious objection to the organizational invariance of
> cognition, but I still have an objection, as I have expressed earlier.
Here's an obvious objection: Minds FEEL. Who says feelings (or the
physical properties on which they supervene) are not more like
hurricanes, wind and air than they are like implementation-independent
computation?
> > CHALMERS:
> > The Chinese room. There is not room here to deal with Searle's famous
> > Chinese room argument in detail. I note, however, that the account I have
> > given supports the "Systems reply", according to which the entire system
> > understands Chinese even if the homunculus doing the simulating does not.
> > Say the overall system is simulating a brain, neuron-by-neuron. Then like
> > any implementation, it will share important causal organization with the
> > brain. In particular, if there is a symbol for every neuron, then the
> > patterns of interaction between slips of paper bearing those symbols will
> > mirror patterns of interaction between neurons in the brain, and so on.
We will defer the discussion of the Chinese Room Argument till next
week, when you all read it!
> Suppose that there is a precise time dependence between the neurons in
> the brain. The system described above could simulate a brain,
> neuron-by-neuron, just much slower - if we slow down the operation of
> the brain universally, then it is conceivable that the time dependence
> will not be sacrificed. The system description is still discrete
> however, and hence I would argue that the patterns of interaction
> between the slips of paper would not mirror patterns of interaction
> between neurons in the brain.
This bypasses Searle's argument (which is based on first supposing that
a computer could pass the TT, and then showing that it would still lack
mental states). You are going on the guess that real timing is critical
(for mental states, or even for passing the TT?).
> > CHALMERS:
> > We have every reason to believe that the low-level laws of physics
> > are computable. If so, then low-level neurophysiological processes
> > can be computationally simulated; it follows that the function of
> > the whole brain is computable too, as the brain consists in a
> > network of neurophysiological parts. Some have disputed the premise:
> > for example, Penrose (1989) has speculated that the effects of
> > quantum gravity are noncomputable, and that these effects may play a
> > role in cognitive functioning.
We don't have to go to quantum gravity: HEAT is noncomputational, and
feelings could (say) supervene on brain-temperature!
> It could be that the low-level laws of physics are not computable for
> the very same reason that I have argued for mentality not being
> computable. It is reasonable to believe that the effects of quantum
> gravity play a role in cognitive functioning, as cognitive functioning
> involves movement of electrons in the brain.
Maybe. But it's a desperate measure, for electrons could by exactly the
same token be critical to liver or heart function (and they're not)....
> > CHALMERS:
> > There are good reasons to suppose that whether or not cognition in
> > the brain is continuous, a discrete framework can capture everything
> > important that is going on. To see this, we can note that a discrete
> > abstraction can describe and simulate a continuous process to any
> > required degree of accuracy. It might be objected that chaotic
> > processes can amplify microscopic differences to significant levels.
> > Even so, it is implausible that the correct functioning of mental
> > processes depends on the precise value of the tenth decimal place of
> > analog quantities. The presence of background noise and randomness
> > in biological systems implies that such precision would inevitably
> > be "washed out" in practice. It follows that although a discrete
> > simulation may not yield precisely the behavior that a given
> > cognitive system produces on a given occasion, it will yield
> > plausible behavior that the system might have produced had
> > background noise been a little different. This is all that a
> > proponent of artificial intelligence need claim.
> I accept that a discrete simulation would be accurate to a certain
> degree, but due to the high connectivity of neurons in the brain,
> small differences could cause large differences as reactions spread
> from neuron to neuron. The argument that background noise could "wash
> out" precision in practice is a good one, and I don't have an argument
> against it. I do object to Chalmers talking of randomness, directly
> after he talked of chaotic behaviour, and I don't believe that
> randomness has a place in biological systems, or in any system, for
> that matter.
You might want to look a little at the capabilities of probabilistic
automata. Randomised algorithms can sometimes efficiently solve problems
for which no fast deterministic method is known...
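The "small differences amplified" worry is easy to demonstrate (a standard chaos illustration of my own, not from this exchange): two trajectories of the chaotic logistic map that differ only in the tenth decimal place soon diverge to order-one differences. Whether noise "washes out" such precision in the brain is exactly what is in dispute.

```python
# Two trajectories of the logistic map (r = 4, chaotic) that start a
# tenth-decimal-place apart. The gap gets amplified to order one --
# the "small differences amplified" worry discussed above.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x, y = 0.3, 0.3 + 1e-10
max_gap = 0.0
for _ in range(60):
    x, y = logistic(x), logistic(y)
    max_gap = max(max_gap, abs(x - y))

print(max_gap > 0.1)  # True: the tiny initial difference has blown up
```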
> > CHALMERS:
> > It follows that these considerations do not count against the theses of
> > computational sufficiency or of computational explanation. To see the
> > first, note that a discrete simulation can replicate everything essential
> > to cognitive functioning, for the reasons above, even though it may not
> > duplicate every last detail of a given episode of cognition.
> > To see the second, note that for similar reasons the precise values
> > of analog quantities cannot be relevant to the explanation of our
> > cognitive capacities, and that a discrete description can do the
> > job.
The irrelevance of analog, spatial, temporal, chaotic, probabilistic and
other noncomputational properties to mental states (or even to passing
the TT) certainly has not been shown!
This archive was generated by hypermail 2b30 : Tue Feb 13 2001 - 16:36:26 GMT