Mind in Society: Where the Action Is?



Paul F. Secord (Ed.)

Explaining Human Behavior: Consciousness, Human Action and Social Structure

Beverly Hills, CA: Sage, 1982. 320 pp.



Review by

Stevan Harnad


Paul F. Secord is professor of psychology and education at the University of Houston. He is coauthor, with M. Guttentag, of Too Many Women? The Sex Ratio Question. Stevan Harnad is founder and editor of The Behavioral and Brain Sciences (Princeton, New Jersey). He contributed the chapter "Neoconstructivism: A Unifying Theme for the Cognitive Sciences" to T. W. Simon and R. J. Scholes's Language, Mind and Brain.



I have some doubts about the unity of this volume, but let the reader judge from the sample that follows.

In his chapter titled "Consciousness," Charles Taylor suggests that the traditional mind/body, mental/physical dichotomy is an undesirable legacy of the seventeenth century. Its fault is that it gives rise to a dualism that must then be resolved in various unsatisfactory ways. The most prevalent of these ways is currently "functionalism," which explains cognition in terms of functional states and processes like those of a computer and "marginalizes" (i.e., minimizes or denies completely the causal role of) consciousness. The alternative, "interactionism," gives due weight to consciousness but at the cost of adding an independent domain to the physical one, namely, the mental, and possibly tampering indeterminately with physics thereby.

Taylor will have none of this wrong-headed dichotomy and its undesirable sequelae. From his background in political theory he proposes to carve the world instead in terms of "agents" versus "nonagents." Machines not only lack minds (in the terms of the old dichotomy), but, more important, they lack the "significance feature," or, as Taylor more recently puts it, they lack "mattering": nothing matters to them. Only of (human) agents does it make sense to say that something "matters" to them or that something has a certain "significance" for them. Hence it is only human agents that can be said to have goals. A machine's "goals" (e.g., to heat, move, build, calculate, solve) are really just our goals, the goals of its builders, users, or interpreters, who are all human agents.

Our goals, on the other hand, are intrinsic to ourselves. Taylor accordingly proposes this agent/nonagent dichotomy (and its corresponding dichotomy of acts that intrinsically matter versus acts that do not matter or only have significance that is derived from what matters to an agent) to replace the dualism derived from the seventeenth century. Its virtue is that it is a dichotomy within the physical world rather than between some physical and some mental world. It returns consciousness from the marginal position of being an unreliable source of information (about the surrounding physical world and even about its own physical substrate), one that is causally superfluous in a complete physical theory, to the central position of being an important source of our understanding of what it is that matters to us, hence a contributor to its significance.

Does this new dichotomy do the job of ridding us of the dualistic legacy of the traditional mind/body dichotomy? Hardly. For not only are we left with an unexamined property that all "agents" have, namely, the capacity to produce acts with intrinsic significance (on which consciousness somehow piggybacks), which so far is good news neither to the biologist nor to the psychologist, nor is it informative to the sociologist, but we are also left with the rather vexatious "agent/nonagent problem": What entities are agents and what entities are not? And why? And how can you tell them apart?

We are meant to derive relief from dualism by trading on this crisp, new, in-the-world dichotomy. But it somehow seems to be parasitic on the old one; and where the old one leaves a relatively clear, though troublesome, dualism to contend with, the new dichotomy seems to offer obscurity or obfuscation: What makes an agent an agent? In my own case, I know. It is my subjective sense of purpose, significance, intention (in fact, any subjective sense at all will probably do); in other words, it is my consciousness, my mind. What about other cases? Your agency? I believe in it through the worrisome leap represented by the "other minds" problem (in the old dichotomy; but what is it here?). Animal agency? A slightly bigger leap. Machine agency? Current machines are too far to be reached by the leap, but future machines? Robots that pass the Turing test for indistinguishability from people? Why won't I want to "attribute" agency and intrinsic significance to them? Because of crystal-clear intuitions or knowledge concerning what sets biological organisms apart? Hardly. Because of compelling a priori considerations concerning agency? None that I've heard to date. Welcome to the seventeenth century, Professor Taylor.

Stephen Toulmin's chapter, "The Genealogy of 'Consciousness,'" attempts to wave off the mind/body problem etymologically and historically, looking at how the word consciousness became substantivized and evolved from its prior meaning of "joint thinking by several individuals." There are references to Donne, Montaigne, and the narcissism of Descartes' age and beyond. I must confess that I do not see how philology can be expected to solve the problems of philosophy; in fact, if I did not know the author's distinguished reputation, I would have adjudged the chapter yet another instance of semiotic silliness or hermeneutic humbug. Perhaps the sociopolitical use of consciousness ("political consciousness," "consciousness raising") is a gratuitous source of confusion, giving to representatives of different disciplines the illusory impression of common cause. (This seems to be one of the weaknesses of the present volume.) In any case, consciousness might indeed have a social dimension: it could conceivably have evolved in a social, communicative context, and for interactive purposes. But even if this is the case, Toulmin's etymological analysis does not seem to add to our understanding, probably because the particular historical period under scrutiny is millennia too late.

John Sabini and Maury Silver's chapter, "Some Senses of Subjective," passes over the classical problem of subjectivity ("Sense 8: Cartesian Subjectivity... [a]s social psychologists, we have little to say about this sense," pp. 77-78) and does not seem to put Tom Nagel's views on "point of view" to especially good use or to cast any new light on "relativism"; but that's only a point of view, of course.

Rom Harré's chapter, "Psychological Dimensions," expands the "Cartesian" dichotomy into what appears to be an arbitrary three-dimensional space of "continua" (private/public, individual/collective, personal/social; "further dimensions could be created aux choix," p. 100). The taxonomizing is questionable; it feels just about as lapidary as the semantic differential or higher-order personality factors, but without even the benefit of a statistical analysis.

Justin Leiber's chapter, "Characteristics of Language," attempts to generalize the analogy/homology distinction in comparative biology to the Turing-machine-equivalence issue in cognitive science. It seems to me he gets it wrong. By analogy with a universal Turing machine that simulates all other Turing machines, Leiber proposes a "universal" (why "universal"?) "locomotive machine" that simulates all motion. The point of the example is supposed to be that such a machine would provide only an analogy to animate motion, not a homology, and hence that the same is true of Turing machines vis-à-vis "psychological states." The problem is with Leiber's analogy itself: The locomotive machine (never mind the "universality" question; that is a red herring) is a Turing machine; at least the "solipsist" version, with only a machine table (a program), is. The "naturalist" version is a kind of robot, which leaves completely untouched the question of whether its (unspecified) inner workings are "homologous" or only "analogous" to a machine table. None of this appears to me to cast any light on the psychological reality of Turing equivalence. The motion/cognition analogy implicit in all of this seems misleading because although motion can be formalized, it is not intrinsically formal, whereas thought and language, although perhaps not merely formal, are at least partially so intrinsically. Hence I do not find that this illuminates the chimpanzee language issue, to which Leiber applies it ("Washoe has language analogically... [but] lacks language homologically," p. 123), or the issue of "modularity" (Leiber urges us to think in terms of homologous "modules" rather than analogous "functions").

I had difficulty knowing what to make of Peter T. Manicas's "The Human Sciences: A Radical Separation of Psychology and the Social Sciences." Manicas seems to be very enthusiastic about "open" systems. He takes psychology to be "neuropsychology" (without considering the standard functionalist objections) and views its goal as not the "explanation" of behavior but the provision of a ("generative") account of our competence, our "powers." So far this sounds like some vaguely familiar Chomskian line, although out of context and without rigorous support. But then the author adds his final stricture, which leaves one entirely perplexed, for this generative neuropsychological account must be based on "patterns and tendencies," not "constant conjunctions." One is at an utter loss to understand what a pattern or tendency might be other than some form of constant (or frequent) conjunction, either in time or in space.

The conclusion of D. W. Hamlyn's chapter, "The Concept of Social Reality," namely, that reality is not a social construction, not just a set of concepts agreed on by people, is surely right. (Of course, the foregoing is just another way of indicating that one shares the author's realist position.) But Hamlyn's (Wittgensteinian?) supporting argument, that other (human?) beings must exist in order for anyone to have concepts at all, is not obviously right. Why can the requisite "correction" (feedback, constraints) for concept formation come only from other beings? Why couldn't it come directly from the environment (i.e., reality)? It is not clear, for example, where the concept "other beings" or "correctors" would come from in Hamlyn's account. (The necessity for consciousness in all this is also debatable; and what distinguishes it all from a mindless, purely causal story is part of the mind/body problem.) Yet our concepts are always constructive and underdetermined. So "limits" may indeed be all that reality imposes.

There are several other chapters, but there is no space to review them here. The editor, Paul Secord, has furnished a good overview of each contribution in his introduction. The chapters I have discussed are those with some bearing on the problems of cognition and consciousness in the usual form that psychologists and philosophers regard as relevant and significant. Other chapters (and other parts of the chapters I have reviewed) are more concerned with social questions, whose pertinence to the problems of consciousness, or vice versa, is less clear to me. If this volume is assessed as a concerted contribution to the study of consciousness, then, apart from a few ideas here and there, I am afraid that it does not furnish much. If the theme is taken to be the explanation of human behavior, then it seems to offer even less. If there is a substantive social theme that cuts meaningfully across both of these problems, then it has escaped this reviewer's consciousness.