In this paper the author, Zenon W. Pylyshyn, sets out his view of cognitive architecture and the algorithms that run on it.
First he introduces the concept of "virtual machines" and "real machines":
> There is also a natural extension of this notion to
> "Virtual machine" architectures, where the latter are architectures
> (such as LISP, PROLOG, OPS5, ACT*, SOAR, and so on) that are simulations
> of some conventional machine in software. However, as Al Newell never
> tires of pointing out, there is no principled distinction between a
> virtual and a real architecture: they are all equally real
> physical architectures.
Here he is stating that while the virtual machine and the real machine are
running there is no difference between them; it is only when the virtual
machine is turned off that a distinction can be found - essentially pulling
the plug. I agree: once the plug is pulled the virtual machine no longer
has the ability to perceive, whereas a real machine has no plug unless
deceased - although who knows what happens then!
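Newell's point can be made concrete with a toy sketch of my own (not from the paper): a "virtual machine" is just an interpreter program, and while it runs, its instruction set is realised by the host machine's physical states just as directly as native code is.

```python
# Sketch: a tiny stack-machine "architecture" hosted on Python, which is
# itself hosted on a physical CPU. While it runs, the virtual architecture
# is as physically real as the machine underneath it.

def run(program):
    """Interpret a program written for the virtual stack machine."""
    stack = []
    for op, *args in program:
        if op == "PUSH":
            stack.append(args[0])
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack[-1]

# (2 + 3) * 4 expressed in the virtual machine's instruction set
prog = [("PUSH", 2), ("PUSH", 3), ("ADD",), ("PUSH", 4), ("MUL",)]
print(run(prog))  # → 20
```

The point is that the distinction between "virtual" and "real" is a matter of level of description, not of physical reality.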
> For any particular computational process there is only one level of the
> system's organization that corresponds to what we call its cognitive
> architecture. That is the level at which the states (datastructures)
> being processed receives a cognitive interpretation.
This is an important concept, as he specifically says that for a given
computational process there is only ONE level of organization that allows
that process to be interpreted cognitively. In many ways we can say
that there are interlocking states of cognition in the human mind, which are
used to retrieve abstract information.
Next he states:
> The level at which the system is representational, and where the
> representations correspond to the objects of thought (including
> percepts, memories, goals, beliefs, and so on). In other words, the
> semantic interpretation of these states figures in the explanation of
> the cognitive behavior. Notice that there may be many other levels of
> system organization below this, but these do not constitute different
> cognitive architectures because their states do not represent
> cognitive contents. Rather, they correspond to various kinds of
> implementations, perhaps at the level of some abstract neurology, which
> realize (or implement) the cognitive architecture.
From this we can see that the system uses different levels of representation
to describe perception and knowing. But the levels below the representational
one are not representative of differing architectures, only of differing
implementations of the same one. From this we can say that the architecture
would be the same for a wide range of cognitive tasks.
> The difference between cognitive architecture and other levels of system
> organization is fundamental; without an independently motivated theory of
> the functional architecture, a computational system cannot purport to be
> a literal model of some cognitive process. There are three important
> reasons for this, which I will try to sketch below.
From here he says that we have to find a model of the mind before we can have
an accurate model of the processes that run on it. E.g. to model a given
process we have to understand the algorithms that are used to run that process.
> * Architecture-relativity of algorithms and strong equivalence. For
> most cognitive scientists a computational model is intended to
> correspond to the cognitive process being modelled.
We have to first understand the real system before we can model a synthetic
one on it. This is a good and obvious point:
to model a real process we must first have the real architecture.
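The relativity of algorithms to architecture can be illustrated with a hypothetical sketch (mine, not Pylyshyn's): two programs can be weakly equivalent (same input-output function) without being strongly equivalent (same algorithm), and which algorithm a system can even use depends on the primitive operations its architecture provides.

```python
# Two weakly equivalent programs: both compute 1 + 2 + ... + n,
# but by different algorithms with different step counts.

def sum_iterative(n):
    """O(n) additions - plausible on an architecture whose only
    arithmetic primitive is addition."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_closed_form(n):
    """O(1) - needs multiplication and division as primitives."""
    return n * (n + 1) // 2

assert sum_iterative(100) == sum_closed_form(100) == 5050
# Same input-output behaviour, different algorithms: a model is only
# "strongly equivalent" to the mind if it executes the same algorithm,
# and which algorithms are available depends on the architecture.
```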
> * Architecture as a theory of cognitive capacity. Another way of
> looking at the role of architecture is as a way of understanding the set
> of possible cognitive processes that are allowed by the structure of
> the brain.
It would be possible to find the total set of processes and functions
that the brain can perform, and from there you would have the upper bounds of
cognitive processing, which is a good model of the system. Again, it
is good practice to find the capacity of a system before investigating
the processes it may perform.
> Architecture as marking the boundary of representation-governed
> processes. Namely, that the architecture must
> be cognitively impenetrable.
This is a strange idea: for the boundary to be impenetrable there must
be no interaction between these cognitive levels, yet many processes must
surely use many different levels and areas.
From this point he goes on to talk about algorithms and architectures.
> Cognitive algorithms, the central concept
> in computational psychology, are understood as being executed by the
> cognitive architecture. According to the strong realist view, a valid
> cognitive model must execute the same algorithm as that carried out by
> the subject being modelled.
From this we understand that for a given algorithm to work we must have the
right architecture for the given process; for example, using a chess algorithm
to talk to a person just would not work! We have to use an accurate
algorithm based on a model to be most successful.
He also states that the architecture has to be sufficiently powerful
for the given problem, otherwise the process could not be successfully
modelled. E.g. using a calculator for complex speech recognition and
understanding just would not happen!
> The distinction between directly executing an algorithm and executing
> it by first emulating some other functional architecture is crucial to
> cognitive science. It bears on the central question of which aspects of
> the computation can be taken literally as part of the cognitive model
> and which aspects are to be considered as part of the implementation of
> the model (like the colour and materials out of which a physical model
> of the double helix of DNA is built).
From this we can understand that it is very different to execute a given
rule-based algorithm directly than to execute it by first emulating another
architecture; e.g. whether we are using a direct model or one layered on an
emulation.
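A small sketch (my example, not the paper's) of this direct-versus-emulated distinction: the same addition done directly and via an emulated architecture gives the same answer, but only the emulated machine's steps belong to the cognitive model; the host's operations are implementation detail, like the colour of a DNA model's parts.

```python
# Direct execution vs. execution on an emulated architecture.

def add_direct(a, b):
    """One host-level operation."""
    return a + b

def add_emulated(a, b):
    """Emulate a hypothetical 'increment-only' architecture on top of
    Python: the emulated machine can only add one at a time."""
    steps = 0
    while b > 0:
        a += 1
        b -= 1
        steps += 1      # steps of the *emulated* machine, not the host
    return a, steps

print(add_direct(3, 4))    # → 7
print(add_emulated(3, 4))  # → (7, 4): same result, four emulated steps
```

Only the four emulated steps would count as part of a model claiming the subject uses increment-only addition; how Python itself realises them is irrelevant to the model.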
> This issue frequently arises in connection with claims that certain
> ways of doing intellectual tasks, for example by the use of mental
> imagery, bypasses the need for knowledge of certain logical or even
> physical properties of the represented domain, and bypasses the need
> for an inefficient combinatorial process like logical inference.
From here we can see that when the mind views something we do not
have to run a lengthy algorithm to process the information
received; we can understand it immediately, so we can infer that
the cognitive architecture has some such method built in. In machine
form this would be hard to model without having the same biological
architecture as the mind being modelled.
> Architecture and Cognitive Capacity Explaining intelligence is
> different from predicting certain particular observed behaviours.
> In order to explain how something works we have to be concerned with
> sets of potential behaviours, most of which might never arise in the
> normal course of events.
We can interpret this as meaning that we have to capture the complete set of
potential behaviours to model the total process; otherwise the model would not
react as the mind would for a given process.
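One way to picture "capacity" as a set of potential behaviours (a sketch of my own, not from the paper): a finite-state machine fixes, in advance, the whole set of strings it could ever accept, most of which may never occur in practice, just as the architecture fixes the set of cognitive processes that could ever run on it.

```python
from itertools import product

# A tiny finite-state "architecture": accepts strings over {a, b}
# that end in 'b'.
ACCEPTING = {"s1"}
TRANSITIONS = {("s0", "a"): "s0", ("s0", "b"): "s1",
               ("s1", "a"): "s0", ("s1", "b"): "s1"}

def accepts(s):
    state = "s0"
    for ch in s:
        state = TRANSITIONS[(state, ch)]
    return state in ACCEPTING

# The machine's *capacity*: every string (up to length 3) it could
# accept, whether or not any of them is ever actually observed.
capacity = ["".join(p) for n in range(1, 4)
            for p in product("ab", repeat=n) if accepts("".join(p))]
print(capacity)  # → ['b', 'ab', 'bb', 'aab', 'abb', 'bab', 'bbb']
```

The structure (states and transitions) determines the whole repertoire; any single observed string tells us much less than the capacity does.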
From here the author introduces the idea that the outcome of a given process
is not solely due to the internal architecture of the model.
> But how can the behavior of a system not be due to its internal
> construction or its inherent properties? What else could possibly
> explain the regularities it exhibits? It is certainly true that the
> properties of the box determine the totality of its behavioral
> repertoire, or its counter factual set; i.e. its capacity.
He suggests that the capacity of the system - the set of processes it
can perform - is determined by the architecture, but that the outcome of
those processes is not solely based on the architecture.
He argues that the system's reactions are based on knowledge that has been
gathered, and that the mind does not use a purely biological method for
reasoning, but uses a rule-based system to perceive things.
> What the biological mechanism does provide is a way of
> representing or encoding the relevant knowledge, inference rules,
> decision procedures, and so on -- not the observed regularity itself.
The idea that we are bound by knowledge is coherent, although some biological
functions must apply, as a newborn child has certain innate abilities and
must be able to perceive certain things.
> In other words the cognitive penetrability of the observed regularity
> marks it as being knowledge-dependent and as involving reasoning --
> even if one is not aware of such reasoning -- taking place. It is within
> the cognitive capacity of the organism to assign a different referent
> to the pronoun, with the new assignment being explicable in terms of
> the same principles that explained the original assignment, namely in
> terms of an inference from general background beliefs. The difference
> between the cases would be attributed to a difference in the state
> of knowledge or belief of the cognizers, and not to a difference in
> their capacity or cognitive architecture.
Given different outcomes from the same architecture and the same process, the
only reasonable conclusion is that the difference comes from different
knowledge representations and inferences.
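This point can be sketched in code (my illustration, not Pylyshyn's): one inference "architecture", two knowledge bases. The differing outputs trace to the knowledge, not to the engine.

```python
# One fixed "architecture" (a minimal forward-chaining inference engine),
# run over two different bodies of knowledge.

def forward_chain(facts, rules):
    """Apply rules of the form (premises, conclusion) until no new
    facts can be derived (a fixpoint)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Two hypothetical knowledge bases for the same engine.
rules_a = [(("rain",), "wet_ground"), (("wet_ground",), "slippery")]
rules_b = [(("rain",), "umbrella_needed")]

print(forward_chain({"rain"}, rules_a))  # derives wet_ground, slippery
print(forward_chain({"rain"}, rules_b))  # derives umbrella_needed
```

Same input, same architecture, different conclusions: exactly the pattern Pylyshyn attributes to differences in knowledge or belief rather than in capacity.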
ARCHITECTURE AND THE AUTONOMY OF THE COGNITIVE LEVEL
> The need for an independently-motivated theory of the cognitive
> architecture can also be viewed as arising from the fundamental
> hypothesis that there exists a natural scientific domain of
> representation-governed phenomena (or an autonomous "knowledge-level").
Here he introduces the concept of an autonomous knowledge level. E.g.
if we put our hand into a fire, we automatically pull it away without
thinking to pull the arm away! Basic knowledge and operations are important
in perceiving things and can be seen especially in newborn children.
> In general, if we can show that a certain regularity is cognitively
> penetrable we have good reason to believe that it involves reasoning.
> This, in turn, provides strong grounds for assuming that it is
> attributable, at least in part, to the nature of the representations
> and the cognitive processes operating over them. Thus cognitive
> penetrability is an important methodological tool for determining
> whether certain patterns reflect properties of the architecture or of
> the rational treatment of goals, beliefs, and knowledge -- i.e. of
> decision-theoretic considerations.
If something can be shown to be changeable, e.g. the way we perceive colours,
then it can be said to be based on the knowledge of the system and not on the
autonomous knowledge level; it involves reasoning about the
problem based upon the rules.
Interesting paper - Liked the example of the color filters.
Nick Worrall email@example.com
This archive was generated by hypermail 2b30 : Tue Feb 13 2001 - 16:36:27 GMT