Re: Chalmers on Computation

From: Cliffe, Owen (oc197@ecs.soton.ac.uk)
Date: Mon Mar 06 2000 - 19:54:56 GMT


> CHALMERS:
> Instead, the central property of computation on which I will focus is one
> that we have already noted: the fact that a computation provides an
> abstract specification of the causal organization of a system. Causal
> organization is the nexus between computation and cognition. If cognitive

Indeed: a true cognitive system implements a mind, whereas a computational
system can at best implement a model based upon the causal organization of
the mind.

> CHALMERS:
> systems have their mental properties in virtue of their causal
> organization, and if that causal organization can be specified
> computationally, then the thesis of computational sufficiency is
> established.

And if you can show that the causal organization of a mental system
_relies_ upon a domain that cannot be specified computationally, then you
would go a long way towards proving the converse (i.e. Penrose's position).

> CHALMERS:
> Similarly, if it is the causal organization of a system that
> is primarily relevant in the explanation of behavior, then the thesis of
> computational explanation will be established.

> Brooking:
> It is believable that cognitive systems have their mental properties in
> virtue of their causal organization. But can the causal organization be
> specified computationally?

You could always do a pretty good job of modeling the causal organization
of a mental system by taking the mind and modeling it at a very low level,
but then you run into the boundary problem, which seems much harder to
come to terms with than having a system that you cannot adequately specify
computationally (a problem you could probably tackle with statistical
modeling and numerical analysis techniques).

> Brooking:
> Can we be sure that any such changes are valid? A causal topology has
> been described in the paper as representing "the abstract causal
> organization of the system". In other words, it is "the pattern of
> interaction among parts of the system". It "can be thought of as a
> dynamic topology analogous to the static topology of a graph or
> network". What if the interaction among parts of the system is time
> dependent? By stretching, distorting, expanding or contracting the
> system, this time dependence will probably be disturbed.

With a connectionist model, the neural structure of the brain is the
abstract causal topology of the system; i.e. it is not very abstract at
all, more of a copy than an abstract representation.

> Brooking:
> What does mentality depend on?

No, it depends on the ability to act as if you have a mind, i.e. to show
mind-like behavior.

> Brooking:
> Does mentality not depend on a
> particular physiochemical make up, as with digestion?

Maybe, but that biochemical (and electrical) make-up can be included in
the model of the system.

> CHALMERS:
> Assume conscious experience is not organizationally invariant. Then
> there exist systems with the same causal topology but different
> conscious experiences. Let us say this is because the systems are made
> of different materials, such as neurons and silicon [...] Consider
> these [two] systems, N and S, which are identical except in that
> some circuit in one is neural and in the other is silicon.
>
> The key step in the thought-experiment is to take the relevant neural
> circuit in N, and to install alongside it a causally isomorphic silicon
> back-up circuit, with a switch between the two circuits. What happens when
> we flip the switch? By hypothesis, the system's conscious experiences will
> change [...]

That assumes that you actually can switch between the two, i.e. that it is
possible to create a device in S, independent of N, that can actually do
the same thing as the corresponding part of N.

> Brooking:
> If all that has been said until now can be taken as truth, then this
> is a perfectly reasoned argument, and it is perfectly reasonable to
> expect a causal difference to be seen when experiences change. However
> this whole argument relies on the fact that the two circuits are
> functionally identical, and I haven't accepted that this will be the
> case after the changes (replacement of neurons with silicon) have been
> made.
Yes.

 
> Brooking:
> I have said that I don't believe a system in possession of mentality
> can be captured by a discrete specification, due to time dependence.
> Time dependence can be captured in a discrete system, to an ever
> increasing level of accuracy, so my argument may come down to whether
> we will ever be able to describe a brain in such a way that the way in
> which all of the neurons react is known. I will argue for the first
> sort of challenge given above, as I believe however accurate a
> discrete system can get, it will never be accurate enough.

The same goes for a system's stability. The state of our mind is
intrinsically stable (most of the time), and it may be true that the stable
running of a brain depends upon time and non-discrete variability. It is
certainly true that a discrete implementation would differ if that were the
case, but it might also be less stable.

> Brooking:
> Suppose that there is a precise time dependence between the neurons in
> the brain. The system described above could simulate a brain,
> neuron-by-neuron, just much slower - if we slow down the operation of
> the brain universally, then it is conceivable that the time dependence
> will not be sacrificed. The system description is still discrete
> however, and hence I would argue that the patterns of interaction
> between the slips of paper would not mirror patterns of interaction
> between neurons in the brain.

Paper is a bad analogue anyway, because you are really treating an
electrical field as the base symbol. The speed invariance is not
important, but because the machine state would effectively be modeling a
large, complicated field description, concurrency would be paramount.
Events that have to take place concurrently (such as the simultaneous
triggering of one or more neurons) would also have to seem to take place
concurrently in the model. So while time stretching is OK, synchronization
is an invariant: different implementations would at least have to seem to
make a set of transitions in mental state, which would occur concurrently
in a real mind, in the same way.
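
A minimal sketch of that invariant (purely illustrative; the event list,
the 10x stretch factor and the per-event jitter are my own assumptions, not
anything from the paper): stretching every event time uniformly preserves
which firings are simultaneous, while perturbing individual timings
destroys exactly the concurrency the model is supposed to reproduce.

# Illustrative sketch: uniform time-stretching preserves simultaneity of
# "mental" events; independent per-event timing drift does not.

from collections import defaultdict

# Hypothetical firing events: (time, neuron)
events = [(1.0, "n1"), (1.0, "n2"), (2.5, "n3"), (4.0, "n1")]

def simultaneous_groups(evts):
    """Return the sets of neurons that fire at exactly the same time."""
    groups = defaultdict(set)
    for t, neuron in evts:
        groups[t].add(neuron)
    return [sorted(v) for t, v in sorted(groups.items()) if len(v) > 1]

stretched = [(10 * t, n) for t, n in events]                        # run everything 10x slower
jittered = [(t + 0.001 * i, n) for i, (t, n) in enumerate(events)]  # tiny independent drift

print(simultaneous_groups(events))     # [['n1', 'n2']]
print(simultaneous_groups(stretched))  # [['n1', 'n2']] -- simultaneity survives stretching
print(simultaneous_groups(jittered))   # [] -- the concurrent firing has been lost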

> CHALMERS:
> We have every reason to believe that the low-level laws of physics
> are computable.
> If so, then low-level neurophysiological processes
> can be computationally simulated; it follows that the function of
> the whole brain is computable too, as the brain consists in a
> network of neurophysiological parts. Some have disputed the premise:
> for example, Penrose (1989) has speculated that the effects of
> quantum gravity are noncomputable, and that these effects may play a
> role in cognitive functioning.

> Brooking:
> It could be that the low-level laws of physics are not computable for
> the very same reason that I have argued for mentality not being
> computable. It is reasonable to believe that the effects of quantum
> gravity play a role in cognitive functioning, as cognitive functioning
> involves movement of electrons in the brain.

I don't know anything about quantum gravity, but I see no reason why a
given component in a system can't be simulated, from observation,
sufficiently well for it to /seem/ as if it were real.

> CHALMERS:
> There are good reasons to suppose that whether or not cognition in
> the brain is continuous, a discrete framework can capture everything
> important that is going on. To see this, we can note that a discrete
> abstraction can describe and simulate a continuous process to any
> required degree of accuracy. It might be objected that chaotic
> processes can amplify microscopic differences to significant levels.
> Even so, it is implausible that the correct functioning of mental
> processes depends on the precise value of the tenth decimal place of
> analog quantities.

Nonsense; it is perfectly plausible that in the long run this kind of
quantization really is important, because chaotic processes amplify exactly
the small differences that Chalmers dismisses.
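
A minimal sketch of why (illustrative only; the logistic map with r = 4
stands in for any chaotic process, and the 10-decimal rounding mirrors
Chalmers's "tenth decimal place"): round one copy of the state at the tenth
decimal place and iterate, and the two trajectories soon disagree at the
full scale of the system.

# Illustrative sketch: in a chaotic map, an error in the tenth decimal place
# is amplified until the quantized trajectory no longer tracks the exact one.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x_exact = 0.123456789123456      # "analog" initial state (arbitrary)
x_rounded = round(x_exact, 10)   # the same state quantized at the tenth decimal

for step in range(60):
    x_exact = logistic(x_exact)
    x_rounded = logistic(x_rounded)

print(abs(x_exact - x_rounded))  # typically of order 0.1-1.0: the rounding
                                 # error has grown to the full scale of the system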

> CHALMERS:
> The presence of background noise and randomness
> in biological systems implies that such precision would inevitably
> be "washed out" in practice.

But you aren't simulating the presence of noise by doing this; you are
adding the property to all values in the system that they will always fall
within the same finite set of values. Take equality: you have two values
(A and B), both real (as in mathematically real), and a certain property of
your system depends on the comparison of these two values at different
times. In the discrete system they are both mapped to the same value, and
in order to simulate the fact that in the real system these two values are
(almost) never the same (OK, maybe they are the same, but in a real system
the probability of this is vanishingly small compared with a discrete
system), you have to choose randomly between the two. At step 1 you decide
that A is larger than B, and at a later time you have to do the same thing
again and decide that B is larger than A; the consistency of the system is
not preserved as it would be in a real system.
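
A minimal sketch of that inconsistency (illustrative only; the grid
resolution and the random tie-break are my own assumptions about how such a
simulator might inject "noise"): two distinct real values collapse onto the
same discrete value, and repeated tie-breaks give contradictory answers to
a comparison that has one fixed answer in the real system.

# Illustrative sketch: once A and B quantize to the same value, "which is
# larger?" must be settled by an arbitrary tie-break, and repeated
# tie-breaks need not agree with each other (or with reality).

import random

A = 0.70000000012   # two distinct "analog" quantities (arbitrary)
B = 0.70000000041

def quantize(x, step=1e-6):
    """Map a real value onto a discrete grid with the given resolution."""
    return round(x / step) * step

def discrete_is_A_larger(a, b):
    qa, qb = quantize(a), quantize(b)
    if qa == qb:                      # information lost: break the tie at random
        return random.choice([True, False])
    return qa > qb

print(A > B)                                           # reality: always False
print([discrete_is_A_larger(A, B) for _ in range(5)])
# e.g. [True, False, False, True, True] -- the model contradicts itself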

Discrete systems are not the same as real (continuous) ones, and they
cannot be used to model real ones (in the long run).

> CHALMERS:
> It follows that although a discrete
> simulation may not yield precisely the behavior that a given
> cognitive system produces on a given occasion, it will yield
> plausible behavior that the system might have produced had
> background noise been a little different. This is all that a
> proponent of artificial intelligence need claim.

Not so; the discrete system would act as if it were totally governed by
background noise, and over time that property would show through.

> Brooking:
> So the argument here is that, although a system cannot reproduce the
> exact operation of a given brain, the operation that it does perform
> is still cognitive. I have said that I believe that the precise time
> dependence in a brain is important, and it is fair to say that a
> discrete system trying to implement cognition would have it's own
> precise time dependence.

I agree, and I also contend that the intrinsic interdependence of such a
system, together with the effects of value quantization, would make the
model intrinsically different from the real thing, to the extent that it
would not preserve enough of the real properties.

> Brooking:
> Would this then constitute cognition? I don't
> know...

I don't think so; I think you would end up with an inelegant simulation
that would be unstable, and not really cognitive.


