Fodor (2000) reviews and criticizes what he calls the New Synthesis
literature (mostly Cosmides and Tooby 1992, Pinker 1997, Plotkin 1997).
He coins the term New Synthesis to name this approach (better known as
Evolutionary Psychology) because, he claims, what these authors do is
simply forge together four great tenets of today's cognitive science by
establishing necessary logical connections between them. These four
tenets are Chomsky's nativism, Turing's computationalism, Darwin's
theory of evolution and Fodor's modularism. The whole of Fodor's book is
thus a complex and sophisticated analysis and refutation of the cited
works and the arguments therein.
Since the reasoning of the book is quite complex, I will only outline
the main points below (trying to follow the logic of the book). In
Fodor's interpretation, the New Synthesis endorses nativism, and, to be
able to explain the causal force of the innate mechanisms it posits, it
is also obliged to suppose that mental processes are computational.
> Mental processes are sensitive solely to the syntax of mental
> representations (because mental processes are computations).
> Syntactic properties of mental representations are ipso facto
> essential (because the syntactic properties
> of any representation are ipso facto essential). Conclusion: Mental
> processes are ipso facto insensitive to context
> dependent properties of mental representations. And this is where
> the trouble starts. For it would seem that, as a matter of fact,
> this conclusion isn't true; as a matter of
> fact, there are context-dependent determinants of the causal roles
> of mental representations in at least some cognitive processes.
Although Fodor's explanation of why nativism and computation should go
hand in hand is, I think, not very clear, the problem of mental
causation is not unknown. So let's admit that there is a necessary link
here. Once this is accepted, the quoted passage highlights a very
important problem that massively computational theories do meet: there
are global, holistic or abductive operations of the mind that cannot be
explained in terms of strictly local computation. Determining
simplicity is a good example (Fodor also provides others), since what
counts as simple, or as simpler than something else, always depends on
the context. Whether a proposition is the simplest one in a theory can
only be determined by comparison with all the other propositions of the
theory.
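To make the local/global contrast concrete, here is a minimal sketch in
Python (my own illustration, not Fodor's; all names are hypothetical,
and formula length is a crude stand-in for simplicity). A local
syntactic property can be computed from a proposition alone; whether
that proposition is the simplest one of a theory cannot:

    def syntactic_complexity(proposition):
        # LOCAL property: computable by inspecting the proposition's
        # own syntax (here, crudely, by counting its tokens).
        return len(proposition.split())

    def is_simplest(proposition, theory):
        # GLOBAL property: undecidable from the proposition in
        # isolation; every other proposition of the theory must be
        # consulted.
        return all(syntactic_complexity(proposition) <= syntactic_complexity(p)
                   for p in theory)

    theory = ["ravens are black",
              "all ravens observed so far in Europe are black",
              "blackness covaries with corvid phylogeny"]

    print(syntactic_complexity(theory[0]))  # needs one proposition
    print(is_simplest(theory[0], theory))   # needs the whole theory

The point survives any better measure of simplicity one might
substitute: however simplicity is scored, the comparison itself ranges
over the whole theory.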
The authors of the New Synthesis might claim that a property which
looks global at one level of abstraction may be recast as a local
syntactic property if a higher level of representation is adopted:
instead of the level of individual propositions, that of whole
theories. That is true, says Fodor, but this way the idea that the mind
is Turing-computational loses its strength and psychological
plausibility:
> Whole theories can't be the units of computation any more than they
> can be the units of confirmation, or of assertion, or of semantic
> evaluation.  Indeed, the totality of one's epistemic commitments
> is vastly too large a space to have to search whatever it is that
> one is trying to figure out.
All the more so since, if we take into account what the philosophy of
science (Polanyi) and psychology tell us about implicit knowledge, all
mental activities, even the seemingly most explicit ones, would always
involve constant shifts between explicit and implicit knowledge (given
that whole theories and epistemic commitments always include an
implicit part). This would be an unwanted consequence.
Fodor then points out that the problem of abduction/globality for the
New Synthesis, and for cognitive science in general, is not only
philosophical in nature; it also has practical implications.
> The theory that mental processes are syntactic gets it right about
> logical form having causal powers; but, in the course of doing so,
> it makes mental causation local, and that can't be true in the
> general case. For example, the failure of artificial intelligence
> to produce successful simulations of routine commonsense cognitive
> competences is notorious, not to say scandalous. We still don't
> have the fabled machine that can make breakfast without burning
> down the house; or the one that can translate everyday English into
> everyday Italian; or the one that can summarize texts; or even the
> one that can learn anything much except statistical regularities.
I also find practical failures rather telling.
Next, Fodor shows the weaknesses of two popular solutions to the
abduction/globality problem: heuristics and connectionism. Skipping
Fodor's short comment on heuristics, I focus on his treatment of
connectionism:
> In particular, the standard current alternative to Turing
> architecture, namely, connectionist networks, is simply hopeless.
> Here, as so often elsewhere, networks contrive to make the worst of
> both worlds. They notoriously can't do what Turing architectures
> can, namely, provide a plausible account of the causal consequences
> of logical form. But they also can't do what Turing architectures
> can't, namely, provide a plausible account of abductive inference.
>  Connectionists have a notorious problem reconciling the way that
> they individuate nodes with patent truths about the productivity,
> systematicity, and compositionality of typical cognitive systems.
> On one hand, all these phenomena appear to depend on complex mental
> representations being constructed from recurrent parts in different
> arrangements; but on the other hand, network architectures haven't
> any way to say that representations can have recurrent parts, for
> example, that "John loves Mary" and "Mary loves John" do.
I think Fodor dismisses connectionism too easily. What he says about
connectionist networks not being able to capture recurrent patterns is
simply not true. If he were to take a closer look at some of Elman et
al.'s work in connectionist modelling of language (e.g. the model that
can segment the acoustic input stream into individual words, the one
that learns English past tenses, or the one that can learn grammatical
categories), he would see that these models do seize something of what
is regular in language and language acquisition (exactly the kind of
thing he cites as an example, e.g. "John loves Mary" and "Mary loves
John"). Of course, these regularities and recurrent patterns are not
symbolic but statistical; that, however, is another concern.
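To make the rebuttal concrete, here is a minimal sketch (my own
illustration, not one of Elman's models) in the spirit of
Smolensky-style role/filler binding: "John loves Mary" and "Mary loves
John" are built from the same recurrent parts, yet receive clearly
different distributed patterns, and the parts remain recoverable. The
vector scheme and all names are hypothetical:

    import numpy as np

    rng = np.random.default_rng(0)
    d = 2048  # dimensionality of the distributed code

    # Random bipolar vectors for fillers and roles.
    vocab = {name: rng.choice([-1, 1], size=d) for name in
             ["john", "mary", "loves", "AGENT", "ACTION", "PATIENT"]}

    def bind(a, b):
        # Elementwise product: a self-inverse role/filler binding.
        return a * b

    def encode(agent, action, patient):
        # A sentence is the superposition of its role/filler bindings:
        # the SAME recurrent parts, in different arrangements.
        return (bind(vocab["AGENT"], vocab[agent]) +
                bind(vocab["ACTION"], vocab[action]) +
                bind(vocab["PATIENT"], vocab[patient]))

    def cosine(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

    s1 = encode("john", "loves", "mary")   # "John loves Mary"
    s2 = encode("mary", "loves", "john")   # "Mary loves John"
    print(round(cosine(s1, s2), 2))        # ~0.33: distinct patterns

    # Unbinding: who is the agent of s1?
    probe = bind(s1, vocab["AGENT"])
    for name in ["john", "mary"]:
        print(name, round(cosine(probe, vocab[name]), 2))  # "john" wins

Elman's own models are recurrent networks rather than binding schemes
like this one, but the moral is the same: recurrence of parts can live
in the structure of distributed patterns rather than in labeled nodes.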
Fodor goes on:
> So, assuming that cognitive processes are sensitive exclusively to
> local syntax, how does Classical psychology recover the fact that
> the same belief may have different centrality in different
> theories? Nobody knows. Well, the present point is that if
> Classical models aren't able to answer this question, networks
> aren't even able to ask it. For, to repeat, the type-individuation
> conditions that network architectures afford are incompatible with
> a node's being identified transtheoretically.
I am not much of an expert in networks, but from what I have seen in
models, it is not nodes that you have to identify and match
transtheoretically to capture a generalization. So even though Fodor
may be right that corresponding nodes cannot be identified between
networks, this is not relevant here, since (at least in a lot of
networks) there is no one-to-one correspondence between a node and a
symbol; to put it differently, it is not single nodes that instantiate
meanings. Rather, it is in the activity patterns or in the state space
of a network that the above-mentioned generalizations are represented.
So at least in some domains I know of, networks are capable of
performing what Turing machines do.
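A toy rendering of that point, under assumptions of my own (random
patterns standing in for learned ones): each concept is a pattern over
all units, no single unit answers to a symbol, and yet identity is
recoverable by similarity in state space:

    import numpy as np

    rng = np.random.default_rng(1)
    n_units = 200

    # Each "concept" is an activity pattern over ALL units: there is
    # no dedicated "john node" or "mary node" anywhere in the network.
    concepts = {name: rng.normal(size=n_units) for name in
                ["john", "mary", "loves"]}

    # Every unit carries some activity in every concept's pattern...
    print(all(abs(v).min() > 0 for v in concepts.values()))  # True

    # ...yet a noisy pattern is still identified by its position in
    # state space, via similarity to the stored patterns.
    noisy = concepts["john"] + 0.3 * rng.normal(size=n_units)
    print(max(concepts, key=lambda n: concepts[n] @ noisy))  # "john"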
So far, Fodor has argued that the massive computationalism of the New
Synthesis is wrong. Now he goes on to show that the problem with
computationalism forces the New Synthesis to adopt massive modularity,
which is just as problematic:
> "How does the New Synthesis commitment to modularity connect with
> the New Synthesis commitment to a computational theory of mind?" 
> [O]n at least some views of cognition, the architecture of the mind
> is modular; and, on at least one understanding of what a module is,
> modular processes are ipso facto local. Or, anyhow, relatively
> local. If that's right, there are morals that might be derived,
> depending on how much of cognition is modular:
> i. If none of it is, skip this chapter and the next.
> ii. If only part of it is, then a reasonable research strategy
> might concentrate on that part until somebody has a good idea
> about abduction.
> iii. If most or all of it is, then something is badly wrong with
> my claim that abduction is a deep and pervasive problem for
> cognitive science.
> Call the idea that most or all of cognition is modular the
> "massive modularity" thesis (MM).
The above passage illustrates Fodor's main point in the book very well.
His conclusion will be the one formulated in (ii) above. Let's see how
he refutes MM:
> Modules are informationally encapsulated by definition. And,
> likewise by definition, the more encapsulated the informational
> resources to which a computational mechanism has access, the less
> the character of its operations is sensitive to global properties
> of belief systems. Thus, to the extent that the information
> accessible to a device is architecturally constrained to a
> proprietary database, it won't have a frame problem.  A modular
> problem-solving mechanism doesn't have to worry about that sort of
> thing because, in point of architecture, only what's in its
> database can be in the frame.
I tend to agree that there is a close connection between the
computational nature and the modularity of a model/theory. Modularity
can spare the model from having to run through the totality of its
contents, and thus helps to avoid the abduction/globality problems.
Therefore, computationalists may want to adopt modularity. However,
even if they do so, claims Fodor, globality keeps coming back; this
time it takes the form of the input problem. That is, what mechanism
assigns the modules their domain-specific input? If there is a
mechanism that considers all the inputs and then decides which module
to assign them to, then we have a mechanism that is less modular than
the modules it serves (in the worst case, this mechanism is sensory
input itself, but then we are not rationalists any more), so MM does
not hold.
> [E]ach modular computational mechanism presupposes computational
> mechanisms less modular than itself, so there's a sense in which
> the idea of a massively modular architecture is self-defeating.
Another possibility is to have a separate mechanism for each module,
one that is just as specific as the module itself; but what decides the
input of that mechanism then? Another mechanism? This leads to an
infinite regress.
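A schematic rendering of the input problem, with hypothetical module
names (a caricature for illustration, not anyone's proposed
architecture): the dispatcher must classify every stimulus against
every module's domain, so its "database" is the union of all of theirs,
which makes it less encapsulated than any module it serves:

    # Three encapsulated "modules", each with a proprietary domain.
    MODULES = {
        "number": lambda x: f"magnitude estimate of {x!r}",
        "face":   lambda x: f"face analysis of {x!r}",
        "syntax": lambda x: f"parse of {x!r}",
    }

    def route(stimulus):
        # To dispatch at all, the router must be able to tell, for ANY
        # stimulus, which module's domain it falls in. Its knowledge
        # therefore spans every module's domain at once: by
        # construction it is less encapsulated than the modules it
        # feeds.
        if stimulus.isdigit():
            return MODULES["number"](stimulus)
        if stimulus.endswith(".jpg"):
            return MODULES["face"](stimulus)
        return MODULES["syntax"](stimulus)

    print(route("42"))
    print(route("grandma.jpg"))
    print(route("John loves Mary"))

    # And if route() is in turn to receive only its "own" kind of
    # input, some prior mechanism must do for it what it does for the
    # modules: Fodor's regress.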
Fodor rather convincingly shows here that the New Synthesis is caught
in a vicious circle: its proponents espouse massive modularity to save
computationalism from abduction, but then globality comes back in the
form of the input problem. It seems to me that however hard you may
try, you cannot make it with computation (and massive modularity)
alone. There is much more to the mind than that.
Finally, after refuting the usual arguments for adaptationism, Fodor
shows why Darwinism is such an indispensable ingredient in the New
Synthesis cognitive cake.
> [T]he success of a creature's modular computations depends on the
> satisfaction of "natural constraints," or on assumptions of
> "ecological validity," and that these depend, in turn, on
> contingent regularities that hold reliably in the creature's
> environment.  So a question arises: what is supposed to account
> for the ecological validity of such innate beliefs? As far as I can
> see, the answer has to be that they are the products of
> evolutionary selection. The internal connection between the massive
> modularity thesis and psychological Darwinism now becomes
> apparent. 
> How does phylogeny ensure that what the module believes is
> generally true? [D]e facto, natural selection is the only candidate
> if the module is innate.
I find this logic quite interesting. Even if it may seem somewhat
exaggerated to state that the New Synthesis authors endorse Darwinism
simply because they are forced to as a logical consequence of their
previous commitments, I think Fodor does succeed in showing the
intrinsic connection between modularity/nativism and adaptationism.
[Just to illustrate that his claims are not pure fiction, I cite a case
from the cognitive literature. Keil et al. 1998 ("Two dogmas of
conceptual empiricism", Cognition 65, 103-135) proceed in exactly the
same manner: they argue for a hybrid (both associative and explanatory)
model of concept formation and acquisition, and one of their arguments
for the explanatory component is that during their evolution, humans
encountered natural kinds and, to be able to distinguish effectively
between them (and thus to have better chances of survival), had to
develop a mechanism that attends not to just any feature of natural
kinds, but only to the relevant ones (those that explain category
membership).]
If we accept Fodor's line of reasoning about the relation between
innateness, modularity and adaptation, then there is an interesting
and, to my knowledge, quite original implication for language and other
faculties that are keyed to the social rather than the physical world:
> However, in the language case, in contrast to the others, the
> answer does not need to invoke an instructional mechanism by whose
> operation contingent facts about the world can shape the content of
> the creature's beliefs. The reason, of course, is that the facts
> that make a speaker/hearer's innate beliefs about the universals of
> language true (or false) aren't facts about the world; they're
> facts about the minds of the creature's conspecifics.
The important point about this is not that language or the theory of
mind are not adaptations, but rather that they may not be adaptations.
Faculties of the mind that are linked not to the physical but to the
social world do not have to be anchored or grounded by natural
selection. That is what makes Chomsky's story feasible at all.
Fodor's conclusion is thus that the New Synthesis authors' optimism is
unjustified and far-fetched.
> It is a mystery, not just a problem, how mental processes could
> be simultaneously feasible and abductive and mechanical. Indeed, I
> think that, as things now stand, this and consciousness look to be
> the ultimate mysteries about the mind. Which is, after all, only to
> say that we're currently lacking some fundamental ideas about
> cognition. No doubt somebody will have them sooner or later, and
> progress will ensue. Till then, I think we're well advised to plug
> on at the problems about the mind that we do know how to think
> about. Fortunately, it appears that there are interesting, though
> peripheral, parts of the mind that are modular, even if there are
> also more interesting and less peripheral parts of the mind that are
> not.
Thus far, I have been rather sympathetic to Fodor's intricate and
sophisticated argumentation. I do think that we have every reason to
believe that the New Synthesis is wrong about massive modularity and
computationalism. Practical problems do arise, as illustrated by
artificial intelligence, and most of the theoretical considerations and
problems outlined by Fodor point to serious defects of the theory.
There are, however, two points, the second more important than the
first, on which I do not agree with Fodor:
First, the chain of reasoning he presents (from nativism through
computation and modularity to natural selection) may be
characteristic of some authors of evolutionary psychology, but, to
be sure, it is not pervasive in cognitive science. Most studies
and theories about the mind do not simply embrace all these tenets
without further support. In most cases, independent evidence is
provided for each of the endorsed theses. Or, to put it differently,
if two claims are supported by independent evidence, the fact that
they are interconnected or logically imply one another does not
count against them.
Second, and more importantly, Fodor's conclusion (i.e. that we should
only work on what we can probably solve with our present knowledge)
simply does not follow from the fact that there is a lot we do not
know. On the contrary: answers to questions usually come from
actively thinking about them, not from ignoring them. To do the
latter would be to disregard the most interesting and important
scientific questions; it would simply miss the point. It is good to
know that the mind doesn't work that way, but it would be even
better to know how it does.