In this paper the author writes about cognitive architecture and its
role in theories of intelligent processing.
He first defines what a cognitive architecture is, and then goes on to
discuss intelligent (cognitive) processing.
> There is also a natural extension of this notion to
> "virtual machine" architectures, where the latter are architectures
> (such as LISP, PROLOG, OPS5, ACT*, SOAR, and so on) that are simulated
> on some conventional machine in software. However, as Al Newell never
> tires of pointing out, there is no principled distinction between a
> virtual and a real architecture: they are all equally real physical
> architectures. The only difference between them may come down to
> something like the theoretically irrelevant fact that after the power
> has been turned off (or the system reset) some machines do revert to a
> different architecture,
Here he is stating that virtual and real architectures are all equally
real, with no principled distinction between them. I do not agree with
this point: if different languages are used to implement, say, one
algorithm, won't they all behave slightly differently? For example,
Prolog execution proceeds by backtracking, whereas C is executed
sequentially.
This leads to a difference in how information is processed when these
algorithms are executed, and so may lead to different outcomes.
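The contrast above can be sketched in a small hypothetical Python example (the toy problem and function names are invented for illustration): the same answers are computed once by Prolog-style backtracking over choice points and once by a C-style sequential loop, so the extensional result agrees even though the processes are quite different.

```python
# Hypothetical sketch: "find pairs summing to a target" written two ways.
# A Prolog-style solver explores choices and backtracks on failure;
# a C-style version iterates sequentially with fixed nested loops.

def prolog_style(xs, target):
    """Generate solutions by trying a choice, then backtracking on failure."""
    def choose(remaining, partial):
        if sum(partial) == target and len(partial) == 2:
            yield tuple(partial)
            return
        if len(partial) >= 2:
            return  # fail: backtrack to the previous choice point
        for i, x in enumerate(remaining):
            yield from choose(remaining[i + 1:], partial + [x])
    return list(choose(xs, []))

def c_style(xs, target):
    """Fixed sequential nested loops, no backtracking machinery."""
    out = []
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            if xs[i] + xs[j] == target:
                out.append((xs[i], xs[j]))
    return out

xs = [2, 3, 7, 8]
print(prolog_style(xs, 10))  # [(2, 8), (3, 7)]
print(c_style(xs, 10))       # same result, different process
```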
> For any particular computational process there is only one level of the
> system's organization that corresponds to what we call its cognitive
> architecture. That is the level at which the states (data structures)
> being processed receive a cognitive interpretation. To put it another
> way, the level at which the system is representational, and where the
> representations correspond to the objects of thought (including
> percepts, memories, goals, beliefs, and so on).
What he means by this is that for any computational process there is
exactly one level of organisation, the level of internal representation,
at which the states being processed receive a cognitive interpretation
and hence carry the meaning of the information being processed.
> I will argue that for purposes of cognitive science, the difference
> between cognitive architecture and other levels of system organization
> is fundamental; without an independently motivated theory of the
> functional architecture, a computational system cannot purport to be
> a literal model of some cognitive process.
This is an important point: the other levels of system organisation make
up the substrate on which the cognitive representation runs, and
therefore the states of the lower levels of organisation will influence
the states higher up.
> * Architecture-relativity of algorithms and strong equivalence. For
> most cognitive scientists a computational model is intended to
> correspond to the cognitive process being modeled at what might roughly
> be characterized as the level of the algorithm (this view of the proper
> level of correspondence is what I refer to as "strong equivalence").
> Yet we cannot specify an algorithm without first making assumptions
> about the architecture: algorithms are relativized to architectures.
For a computational model to be designed, the relevant algorithm has
first to be designed and its purpose identified; for this to take place,
the underlying architecture must be defined, since algorithms are
relativized to architectures.
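As a sketch of algorithms being relativized to architectures (the functions and data here are invented for illustration): binary search presupposes constant-time random access as an architectural primitive, so a machine or data structure offering only sequential access cannot execute it directly and needs a different algorithm for the same task.

```python
# Hypothetical illustration: an algorithm presupposes architectural
# primitives. Binary search assumes O(1) random access (arr[mid]);
# given only sequential access, a different algorithm is required.

def binary_search(arr, x):
    """Needs random access: arr[mid] is treated as a primitive operation."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == x:
            return mid
        if arr[mid] < x:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

def sequential_search(iterable, x):
    """Only assumes 'next element': works on a stream, no indexing."""
    for i, v in enumerate(iterable):
        if v == x:
            return i
    return -1

data = [1, 4, 9, 16, 25]
print(binary_search(data, 16))            # 3
print(sequential_search(iter(data), 16))  # 3, by a different algorithm
```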
> * Architecture as a theory of cognitive capacity. Another way of
> looking at the role of architecture is as a way of understanding the
> [range] of possible cognitive processes that are allowed by the structure of
> the brain. This means that to specify the cognitive architecture is to
> provide a theory of the cognitive capacity of an organism.
Here he states that specifying the cognitive architecture amounts to
giving a theory of the cognitive capacity of an organism, so modelling
the brain's structure would give us valuable insight into how to design
a cognitive architecture. But is this feasible? No one knows the full
structure and workings of the brain, and it is unclear whether anyone
ever will.
> * Architecture as marking the boundary of representation-governed
> processes. Finally, for many of us, a fundamental working hypothesis of
> Cognitive Science is that there exists an autonomous (or at least
> partially autonomous) domain of phenomena that can be [FOOTNOTE 2]
> explained in terms of representations (goals, beliefs, knowledge,
> perceptions, etc) and algorithmic processes that operate over these
Consider a knowledge-based system that takes algorithms as input and
processes them; the outcome depends on the system. For example, if two
identical algorithms are processed on two different machines, the
likelihood that the outcomes (representations) will be the same is slim.
> Cognitive algorithms, the central concept
> in computational psychology, are understood as being executed by the
> cognitive architecture. According to the strong realist view, a valid
> cognitive model must execute the same algorithm as that carried out by
> the subject being modeled. But it turns out that which algorithms can be
> carried out in a direct way depends on the architecture of the machine
> in question. Machines with different architectures cannot in
> general directly execute the same algorithms.
Algorithms are specific to particular architectures: it is stated that
an algorithm that runs directly on one architecture cannot in general be
executed directly on another. For example, a C program compiled for one
machine architecture cannot be run directly on a machine with a
different architecture, as their internal representations differ (e.g.
instruction sets, and little-endian versus big-endian byte order).
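The byte-order point can be made concrete with Python's standard `struct` module: the same 32-bit integer has different byte-level layouts under little-endian and big-endian conventions, so raw data written under one convention cannot naively be read under the other.

```python
# Sketch of the endianness point: the same value, two byte-level layouts.
import struct

value = 0x01020304
little = struct.pack("<I", value)  # little-endian layout
big = struct.pack(">I", value)     # big-endian layout

print(little.hex())  # 04030201
print(big.hex())     # 01020304

# Reading little-endian bytes as if they were big-endian gives garbage:
misread = struct.unpack(">I", little)[0]
print(hex(misread))  # 0x4030201, not 0x1020304
```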
> The distinction between directly executing an algorithm and executing
> it by first emulating some other functional architecture is crucial to
> cognitive science. It bears on the central question of which aspects of
> the computation can be taken literally as part of the cognitive model
> and which aspects are to be considered as part of the implementation of
> the model (like the color and materials out of which a physical model
> of the double helix of DNA is built). We naturally expect that we shall
> have to have ways of implementing primitive cognitive operations in
> computers, and that the details of how this is done may have no
> empirical content.
Here he distinguishes between executing an algorithm directly and
executing it by first emulating the architecture on which it is meant to
run, and then executing the algorithm on that emulation. In essence the
execution is broken down into components, where each component either
represents part of the model of the cognitive process or merely
implements the model.
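The direct-versus-emulated distinction can be sketched with a toy virtual machine (the instruction set here is invented for illustration). The host language executes `(2 + 3) * 4` directly; the VM computes the same thing by emulation, and only the program run *on* the VM would count as part of a model, while the VM's internals are like the "color and materials" of a physical model.

```python
# Hypothetical sketch: direct execution vs. emulating another architecture.

def run_vm(program):
    """Interpret a toy stack-machine instruction set:
    ('PUSH', n), ('ADD',), ('MUL',)."""
    stack = []
    for instr in program:
        op = instr[0]
        if op == "PUSH":
            stack.append(instr[1])
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

# Direct execution by the host architecture:
direct = (2 + 3) * 4

# The same computation executed by emulating a different architecture:
emulated = run_vm([("PUSH", 2), ("PUSH", 3), ("ADD",),
                   ("PUSH", 4), ("MUL",)])

print(direct, emulated)  # 20 20
```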
> From the point of view of cognitive science it is important to be
> explicit about why a model works the way it does, and to independently
> justify the crucial assumptions about the cognitive architecture. That
> is, it is important for the use of computational models as part of
> an explanation, rather than merely in order to mimic some performance,
> that we not take certain architectural features for granted simply
> because they happen to be available in our computer language. We must
> first explicitly acknowledge that we are making certain assumptions
> about the cognitive architecture, and then we must attempt to
> empirically motivate and justify such assumptions. Otherwise important
> features of our model may be left resting on adventitious and
> unmotivated assumptions.
He states that the assumptions a computational model makes about the
cognitive architecture should be made explicit and justified
empirically, rather than taken for granted because they happen to be
available in our programming language. It is best to understand why the
model works the way it does, rather than merely to have it mimic some
performance.
If this can be achieved, then we can come closer to modelling the mind.
> This issue frequently arises in connection with claims that certain
> ways of doing intellectual tasks, for example by the use of mental
> imagery, bypasses the need for knowledge of certain logical or even
> physical properties of the represented domain, and bypasses the need
> for an inefficient combinatorial process like logical inference. The
> proposal is often stated in terms of the hypothesis that one or
> [another] mental function is carried out by an "analogue" process. From the
> present perspective this would be interpreted as the claim that some
> cognitive function was actually part of the cognitive architecture.
When we process a mental image in our minds it seems to happen
instantly; there are no noticeable delays while information is fetched
and processed. If this were implemented in a computer model, the
algorithms would have to process information without any such delays,
and at the present moment no computer language can handle this. But if
the design somehow built these information-processing functions into the
architecture itself, as primitive ("analogue") operations rather than
computed steps, then it might be feasible.
> Architecture and Cognitive Capacity Explaining intelligence is
> different from predicting certain particular observed behaviors.
> In order to explain how something works we have to be concerned with
> sets of potential behaviors, most of which might never arise in the
> normal course of events. Such a potential (or counterfactual) set
> constitutes the organism's cognitive capacity. In order to make this
> simple point more concrete, consider the following oversimplified
> example from my book (Pylyshyn, 1984b).
The key issue here is that to model the mind, or to find out more about
how it works, we have to examine the different behaviours across all the
states the mind can be in; this is known as the set of potential
behaviours. This is very difficult and possibly impossible: can we
actually find all the behavioural states that a mind can be in? The set
may well be infinite; who knows.
> But how can the behavior of a system not be due to its internal
> construction or its inherent properties? What else could possibly
> explain the regularities it exhibits? It is certainly true that the
> properties of the box determine the totality of its behavioral
> repertoire, or its counterfactual set; i.e. its capacity. But as long as
> we have only sampled some limited subset of this repertoire (say, what
> it "typically" or "normally" does) we may not be in any position to
> infer what its intrinsically constrained capacity is, hence the observed
> regularity may tell us nothing about the internal structure or inherent
> properties of the device. It is easy to be misled by a sample of
> a system's behavior into assuming the wrong sample space or
> counterfactual set
The behaviour of a system does not have to come entirely from within
itself, i.e. from its internal structure (the architecture of the
system); other factors, such as the environment the system is in, can
also shape its behaviour.
We ourselves act differently in different environments, which in turn
has effects on our internal structure, i.e. our minds.
> Again it is an empirical question, though this time it seems much more
> likely that a knowledge-level ("tacit" knowledge, to be sure)
> explanation will be the correct one. The reason for this is that it
> seems likely that the way colors mix in one's image will depend on what
> one knows about the regularities of perceptual color mixing -- after
> all, we can make our image of a certain region be whatever color we
> want it to be!
Here he states that the mind can make an image whatever colour it likes
straight away, drawing on (tacit) knowledge rather than having to
compute the result from fixed machinery. For an architecture to do this
it would have to follow a set of governing rules, which would have to be
defined; for example, when this colour and that colour are mixed they
will produce this third colour.
For an architecture to be as good as the mind it will have to process
information just as a real mind does, in a biological way.
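Such "governing rules" could be sketched as an explicit knowledge base (the rules and names below are invented for illustration): colour mixing is done by lookup over what the system knows, not by any built-in architectural property, so changing the knowledge base changes the behaviour, which is the hallmark of a knowledge-dependent process.

```python
# Hypothetical sketch: colour mixing as explicit, changeable knowledge.

MIXING_RULES = {
    frozenset(["yellow", "blue"]): "green",
    frozenset(["red", "blue"]): "purple",
    frozenset(["red", "yellow"]): "orange",
}

def mix(colour_a, colour_b, rules=MIXING_RULES):
    """Return the mixed colour according to the current knowledge base."""
    return rules.get(frozenset([colour_a, colour_b]), "unknown")

print(mix("yellow", "blue"))  # green

# A subject with different beliefs about colour mixing behaves differently,
# even though the "architecture" (the mix function) is unchanged:
odd_beliefs = {frozenset(["yellow", "blue"]): "red"}
print(mix("yellow", "blue", odd_beliefs))  # red
```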
> In other words the cognitive penetrability of the observed regularity
> marks it as being knowledge-dependent and as involving reasoning --
> even if one is not aware of such reasoning taking place. It is within
> the cognitive capacity of the organism to assign a different referent
> to the pronoun, with the new assignment being explicable in terms of
> the same principles that explained the original assignment, namely in
> terms of an inference from general background beliefs. The difference
> between the cases would be attributed to a difference in the state
> of knowledge or belief of the cognizers, and not to a difference in
> their capacity or cognitive architecture.
Information perceived from the same object will be processed differently
by different knowledge-based systems; that is, the process is
knowledge-dependent. This is a valid point, as different minds will
obviously perceive things differently, and hence their outcomes may
differ.
If computers have different knowledge-based architectures (i.e. their
knowledge is represented and implemented differently), then there could
be inconsistencies between computers.
> This is often a straightforward criterion to apply in practice. In
> order to determine whether certain observed regularities favor a
> particular hypothesized architectural property, we carry out
> experiments to see whether the regularities in question can be
> systematically (and rationally) altered by changing subjects' goals or
> beliefs. If they can, then this suggests that the phenomena do not tell
> us about the architecture, but rather they tell us about some
> representation-governed process; something which, in other words, would
> remain true even if the architecture were different from that
If the regularities of a process can be systematically altered by
changing subjects' goals or beliefs, so that they yield different
outcomes, this indicates that the observed regularities tell us nothing
about the architecture; they tell us instead about a
representation-governed process, i.e. about the knowledge (data
structures) the system represents. This shows that cognitive
penetrability makes a major contribution to separating architectural
properties from representation-governed processes.
This archive was generated by hypermail 2b30 : Tue Feb 13 2001 - 16:36:27 GMT