Sunday, December 13. 2015
No, "Jew Laws" is not the least bit ungrammatical: It faithfully expresses both the meaning and the flavor of the odious Hungarian original. Like many features of the extremely adaptable, accepting (but sometimes awkward and ungainly) English language, noun strings work (though grammarians -- always the ultimate losers when it comes to language evolution -- solemnly advise against them, sometimes for stylistic reasons, sometimes because they -- the strings, that is -- can be ambiguous). Newspaper headline writers, military jargonauts, technical manual scribblers and hate speech mongers love ’em.
Thursday, April 19. 2012
As a Canadian and British citizen (left-leaning), I am sentimentally and aesthetically a royalist, as long as the royals conduct themselves in a way that is aesthetically and ethically positive (and not too much public money is spent on them).
A monarch with a historic pedigree can, like a flag or soil, be a more palpable symbol for a populace to identify with, and take heart in (especially in hard times) than an appointed or elected figurehead. (My native Hungary's current presidential fiasco is a case in point.)
But nothing excuses a monarch who is old enough to know better for going out and wantonly shooting elephants (whether or not his people are groaning under a heavy financial yoke).
If the British royals ever did anything like that today, I would immediately become a republican.
(I'm rather afraid that if I took a closer look at current royals' domestic hunting, I might already become as anti-sovereignist, with reason, as the separatists [ironically calling themselves "sovereignists"] among my fellow Quebeckers already are, without reason, alongside Canadian and Australian republicans, likewise without reason.)
Monday, December 12. 2011
Commentary on Dan Dennett [DD] (2011) "Whole-Body Apoptosis and the Meanings of Lives" On the Human.
DD: “...which would you prefer for your last months on earth: being struck by lightning at some point before you began losing your faculties, or an indefinitely long period of decline, during which you would gradually become unable to perform the simple actions of life and participate meaningfully in conversation or decision-making?”

More options: How about choosing the moment on the fly, based on your condition and prognosis in real time? Or pre-specifying the objective medical criteria on the basis of which you would like it to be decided when the euthanasia should be done (while you’re asleep)? (But wouldn’t many people prefer to be able to say goodbye?)
DD: “We could arrange to have a human body switch itself off quite abruptly and painlessly at a time to be determined.”

Why apoptosis? Nocturnal barbiturate administration would do the trick, if the criteria were objective and reliably (and verifiably) followed. (Apoptosis just adds a needless further layer of sci-fi onto the problem, which is merely to determine when the euthanasia should take place. Surely it’s better to make that decision an objective, criterion-based one rather than simply an a-priori, time-based one?)
DD: “Almost nobody would want to know to a near certainty the exact day and hour of their death, and the reasons why are made vivid in any number of death-row dramas.”

Are death-row inmates (whether guilty or innocent) representative of the rest of humanity? Surely there are many people and reasons for wanting to know when. And even if one does not know, the question is whether the cut-off should be actuarial clock-triggered or substantive criterion-triggered. (Determining the right criterion -- or, for that matter, the right chronometry -- is another matter, and that’s probably where the real substantive issues reside.)
DD: “We install in every human being and in every subsequent human embryo a system that ensures the swift, painless death at some randomly determined time between the age of 85 and 90...”

Why between 85 and 90, or between any t1 and t2? Is the idea that the interval is an absolute, universal one, that fits all of humankind? If not, then surely the other factors (e.g., health, performance capacity, desire to stay alive...) that determine an individual’s time-window matter far more than keeping the moment within the time-window unpredictable.
DD: “How do we balance the increase of suffering against the non-suffering lives of a few?”

By basing the euthanasia point on individual, objective criteria, not a-priori timing.
DD: “If you would prefer to die by lightning bolt while you are still effective and healthy, the price you must be willing to pay is foregoing some years or months that would have been just as effective and healthy as your last days.”

There’s decline and there’s decline. Some people would put up with some bodily deterioration as long as they remained mentally sharp. None of these things is predictable, for an individual, from a-priori timing alone. That’s just population statistics, and if we’re to be treated according to those, then we may as well not see doctors when we are ill: just type in our age and symptoms, or perhaps just our age (so we can be treated automatically for its most common illness)!
DD: “...just because we could arrange to live to be 100 (or 120!) we really have no right to use up so much more than our fair share of the world’s resources and amenities.”

If we are to reason along those lines, it’s not just our right to live out our years that must be subordinated to the rest of the planet’s needs, but what we have a right to whilst we’re alive (and others are wanting). There’s much to be said for (and against) thinking along these general lines, but it has another name than euthanasia or apoptosis. And the management of how long people are entitled to live will be far less consequential than the management of other entitlements (such as wealth, property, reproduction rights, and perhaps even how we spend our days and use our capacities).
DD: “One of the most interesting objections I have provoked in recent discussions is the suggestion that this policy, if adopted, would rob us of precious opportunities to prove our strength by enduring suffering.”

That objection conflates the question of euthanasia itself with pre-timed pop-off. And it is an example of one of the most sordid and sociopathic justifications for withholding euthanasia (reminiscent of an equally noxious credally based one): Let them suffer for the good of their souls (and mine).
Well, fine, in the cases where “their” and “mine” are co-referential. (In other words, where I’m the one who decides I’d rather stick around and suffer.) But that’s the luxury problem, while there are so many who would rather not stay around and suffer, but are not allowed or able to do anything about it.
DD: “...we could use technology to fine-tune the system, to monitor various plausible measures of quality of life in both individuals and populations, so that apoptosis could more optimally track actual mean rates of decline or even rates of decline in individuals so that apoptosis could be customized in any of a dozen ways.”

Better still, once we’ve figured out a better way of “customizing” the hour, forget about the apoptosis and just use nocturnally administered barbiturates...
DD: “We should pause to take seriously — very seriously — the prospect of protecting some aspects of our lives and deaths from management, and thereby reframing our landscape of decisions.”

And actuarially pre-planned apoptotic obsolescence is a protection from management?
DD: “Why should we devote so much of our R&D budget to finding ways of extending life?”

That’s an entirely different matter, completely independent of pre-planned obsolescence.
DD: “...the prospect of being able to live out your remaining days relatively confident that your survivors will not have to set aside memories of a pathetic decline in order to get to the memories of you that matter. What would you trade for that? I’d trade any number of years over 85... (I am 65 as I write this).”

Any updates on this view, now that another half-decade has gone by? The criteria for such decisions are of course personal matters, but I, for one, think the world would be far, far better off with Dan Dennetts staying around as long as they are compos mentis than by doing them in at an appointed age so as to stretch strained old-age pensions one epsilon further. If we’re going to contemplate sci-fi fantasies, looking for a way to engineer apoptosis for pre-programmed death seems to me far less to the point than looking for a way to convince people to limit reproduction, become vegans, and convert to a sustainable way of living.
But Dan’s essay can also be taken to be addressing a far more important and urgent matter than pre-programmed pop-off, namely, euthanasia itself. The worst thing about the status quo on death now is the fact that most people cannot choose to die, even when they wish to. Surely before we can have consensus on pre-programmed pop-off for all (whether or not they want it) we must first agree to allow those who do want to die, now, to do so. Yes, there are complications and risks of abuse that need to be taken carefully into account, but the current status quo is cruel, unjust, and irrational.
Pill-Popping vs. Apoptosis: I think that neither (1) one uniform pre-selected cut-off interval for the life of all human beings nor (2) a preference for being taken by surprise within that interval ("Unexpected-Hanging-Paradox"-style) is for everyone. Or even for most or many -- but this could only be settled by a survey (or perhaps not even that, since people don't always do what they claim they would do...)
Euthanasia itself, however, should (with due precautions against thought disorder and abuse) be available to those who wish it.
So the main point of disagreement is the basis for selecting the time-window for terminating a life. Once that interval is settled, then, if you don't want to know exactly when the grim one will reap, you can start taking randomized, pre-coded pills before going to sleep (5 years' worth, daily, if you like), all of them sugar except the one fateful (double-blind-coded) barbiturate.
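The pill scheme just described lends itself to a concrete sketch (purely illustrative; the function name, the labels, and the five-year default are my own assumptions, not part of the post):

```python
import random

def code_pills(days=5 * 365):
    """Code a run of nightly pills: all sugar except one barbiturate,
    placed at a uniformly random night within the chosen time-window.
    Double-blind coding: neither dispenser nor taker knows which night."""
    pills = ["sugar"] * days
    pills[random.randrange(days)] = "barbiturate"
    return pills

regimen = code_pills()
```

The taker swallows one coded pill nightly and so stays "surprised" within the window, while the window itself can still be fixed by individual criteria rather than actuarially.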
The second point of disagreement is the no-back-out condition. I think that's rather harsh, arbitrary, and non-optimal. So I don't mind that the mortal can decline to take the pills altogether if he changes his mind. The rest is the argument against pre-determining the time-window actuarially for the entire population rather than basing it on individual criteria and wishes.
Compassion and Complacency, Sympathy and Sociopathy
Laws are rational base-camps on the slippery slopes of life
Thursday, September 15. 2011
Re: "Physicists in tune with neurons"
My guess is that you could predict consonance/dissonance without recording neuronal activity. It's already in the physics: consonant sounds share more harmonics, bottom up. You could measure that without neurons, just a device that can detect differences in the harmonic spectrum. (And it would be trivial to make neural devices mirror the same property.)
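For instance (a sketch of my own, with an arbitrary 1% tolerance and ten partials, not anything from the article): simply counting near-coinciding harmonics in the two tones' spectra already ranks a consonant interval above a dissonant one, no neurons required:

```python
def shared_harmonics(f1, f2, n=10, tol=0.01):
    """Count pairs of harmonics of f1 and f2 that fall within a relative
    tolerance of each other -- a purely physical measure of consonance."""
    count = 0
    for a in (f1 * k for k in range(1, n + 1)):
        for b in (f2 * k for k in range(1, n + 1)):
            if abs(a - b) / a < tol:
                count += 1
    return count

# A perfect fifth (3:2) shares more harmonics than a tritone (45:32):
fifth = shared_harmonics(440.0, 660.0)               # 3 coincidences
tritone = shared_harmonics(440.0, 440.0 * 45 / 32)   # 1 near-coincidence
```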
Besides, consonant/dissonant does not correspond to aesthetically "pleasant/unpleasant" (and the right aesthetic adjective is not quite the word "pleasant" anyway): Some of the most excruciatingly beautiful harmonic moments are dissonant ones. (It has more to do with the drawing out or manipulation of expectation in the passage from dissonant to consonant -- but that too is a trivialization...)
(As happens so often: take an absolutely trivial empirical correlation, and make one of its correlates our own precious brain activity, and people are almost superstitiously ready to marvel, the same way they do at their own horoscopes, when they seem to fit...)
And, of course, having detected the physical difference, you're left with the usual (hard) problem, which is not why one feels pleasant and the other not, but why any of it feels like anything at all...
Tuesday, June 21. 2011
(1) Is feeling/nonfeeling an all-or-none distinction?
The answer is most definitely yes. (But the question is not about whether I'm feeling this or that, nor about whether I am feeling more or less. It is about whether I am feeling at all. I can feel a little tired, say, half-tired, but I can't half-feel -- any more than I can half-move [or one can be a little bit pregnant].)
(2) Is believing a feeling (and if so, what's my evidence that that's true)?
The answer is most definitely yes, and the evidence is of precisely the same kind as the evidence that seeing -- or hearing or smelling or hurting -- is feeling. There's something it feels like to smell roses, and when you're smelling carnations -- or onions -- it feels different. In exactly the same way (but more subtly), there's something it feels like to be believing that it's Tuesday today, and something different it feels like to be believing it's Wednesday (and not just the sound of the words it takes to say one or the other). Every JND of difference in mental space feels different. That's what makes mental states mental, and how we tell different mental states apart: Otherwise I wouldn't know whether or not I was believing it's Tuesday any more than I would know whether or not I was in pain. (Knowing is feeling too!)
Aside: None of this has anything to do with Zombies (and I have next to nothing to do with or say about Zombies). But just for the sake of logical coherence: A zombie would be a lookalike that behaved and talked indistinguishably from us, but did not feel. It could not be believing it felt, because believing is feeling! It would merely be behaving (and speaking) exactly as if it were feeling (and believing, and believing it was feeling).
I consider such a possibility so far-fetched and arbitrary as to be absurd, so I never base any argument on the possibility that there could be such a thing.
However, I do point out that we can no more explain how and why there could not be Zombies than we can explain how or why we feel (the "hard problem"). Zombies are absurd because all the evidence is against them: All the entities that behave as if they feel are in fact, like us, biological organisms that feel. We don't know how or why we all feel, but we do know that we invariably do. The speculation that this invariance could be broken -- with entities acting exactly as if they felt, but not feeling a thing -- is as far-fetched as imagining a universe in which apples fell up rather than down, or the 2nd law of thermodynamics was the reverse. Not only can nothing interesting, one way or the other, be derived from such idle suppositions, but -- and this is most important -- even the correct supposition that Zombies are impossible does not do anything whatsoever toward solving the hard problem (of explaining how and why they are impossible, which is equivalent to explaining how and why we feel, rather than just do).
The statement that "believing is feeling" is no less supported, I should think, than "hurting is feeling": I can't do much more than ostension and appealing to what I am pretty confident is our fundamentally similar mental lives in either case. (I did make a bit of a supporting argument about JNDs just now. The gist is that the only thing that distinguishes mental states is that they feel different: Otherwise what makes them not the same mental state? The fact that they may be followed by different behavioral dispositions won't do the trick, because the states are now, not later, so later divergence in behavioral dispositions still doesn't distinguish the mental states now, when I'm having them. My knowledge that I believe it's Tuesday today and that I don't believe it's Wednesday cannot come from what I am inclined to do later -- unless, of course, it feels different to be inclined to do this rather than that -- which would be fine with me; that still leaves the difference between beliefs as a difference in what they feel like...)
Excerpts from Doug Hofstadter's "I Am a Strange Loop":
Semantic Quibbling in Universe Z

I completely agree that this is incoherent -- simply because believing is feeling. What Chalmers should have said is that the Zombie behaves and talks exactly as if he was feeling (including believing, and believing that he was feeling) but in fact he was feeling (and hence believing) nothing.
Well, what bothers me here is the uncritical willingness to say that this utterly feelingless Dave believes certain things, and that it even believes them sincerely. Isn't sincere belief a variety of feeling? Do the gears in a Ferris wheel sincerely believe anything? I would hope you would say no. Does the float-ball in a flush toilet sincerely believe anything? Once again, I would hope you would say no.

I feel sincerely in agreement, and would add only that it is not only a sincere or passionate belief that is felt, but also a phlegmatic, quotidian belief, such as that it's Tuesday today.
And of course all those mechanical devices don't feel.
And of course talk of Zombies that are like us on the outside and like the Ferris wheel on the inside is nonsense.
So suppose we backed off on the sincerity bit, and merely said that Universe Z's Dave believes the falsities that it is uttering about its enjoyment of this and that. Well, once again, could it not be argued that belief is a kind of feeling? I'm not going to make the argument here, because that's not my point. My point is that, like so many distinctions in this complex world of ours, the apparent distinction between phenomena that do involve feelings and phenomena that do not is anything but black and white.

I would and do argue the point that believing is feeling.
But I completely deny the point that the difference between feeling and non-feeling is a matter of degree! It's all or none.
The quality and intensity of the feeling may differ (the latter in degree), but whether there is feeling going on at all is not a matter of degree (though feeling may be flickering, intermittently on/off). In particular, there is nothing (except degrees of doing-power) in between a Ferris wheel, that feels nothing, and, say, an amphioxus which, even if all it can feel is "ouch," is fully one of us sentients.
(I also think that near-threshold phenomenology and psychophysics -- did I feel something or didn't I? -- is irrelevant to all this, but if one insists on citing it: Feeling is instantaneous. In the instant, you feel what you feel (if you are awake and sentient at all). If the source is a stimulus, it is irrelevant that you are uncertain near-threshold: you are not uncertain about what you felt. You felt whatever you felt. You are uncertain whether what you felt was the stimulation you were supposed to be detecting -- whether it was external, from a near-threshold "beep" or endogenous: did I just feel the aura of an impending migraine?).
If I asked you to write down a list of terms that slide gradually from fully emotional and sentient to fully emotionless and unsentient, I think you could probably quite easily do so.

Not me. I could rank intensity, maybe even quality, by degrees, but not whether a feeling is felt! That's an all-or-none divide, and on the other side of it is not an unfelt feeling, but nothing but unfelt doing (a Ferris wheel). Again, near-threshold judgments about a particular external or internal stimulus by a feeling person are irrelevant here. They are feeling; and we are just fussing over what they are feeling, not over whether they are feeling at all: that's an all-or-none matter.
In fact, let's give it a quick try right here. Here are a few verbs that come to my mind, listed roughly in descending order of emotionality and sentience: agonize, exult, suffer, enjoy, desire, listen, hear, taste, perceive, notice, consider, reason, argue, claim, believe, remember, forget, know, calculate, utter, register, react, bounce, turn, move, stop.

If I'm awake, doing every one of those things feels like something -- agonizing as much as tasting or considering or knowing; only quality and intensity differ.
And of course that includes moving (if it is voluntary and I am not anesthetized).
I won't claim that my extremely short list of verbs is impeccably ordered; I simply threw it together in an attempt to show that there is unquestionably a spectrum, a set of shades of gray, concerning words that do and that do not suggest the presence of feelings behind the scenes.

There are spectra of feeling quality and feeling quantity, but an all-or-none divide between feeling and nonfeeling. No continuum from me to the Ferris wheel (except doing). And that's the [hard] problem: doings: easy; feelings: hard...
The tricky question then is: Which of these verbs (and comparable adjectives, adverbs, nouns, pronouns, etc.) would we be willing to apply to Dave's zombie twin in Universe Z? Is there some precise cutoff line beyond which certain words are disallowed? Who would determine that cutoff line?

No tricks at all. If there could be a Zombie, it would have to be feeling nothing at all, just doing, not feeling. But supposing that an unfeeling, ramified Ferris wheel could be doing what we are doing now -- namely, discussing feeling, mutually intelligibly -- is pure fantasy.
To put this in perspective, consider the criteria that we effortlessly apply (I first wrote "unconsciously", but then I thought that that was a strange word choice, in these circumstances!) when we watch the antics of the humanoid robots R2-D2 and C-3PO in Star Wars. When one of them acts fearful and tries to flee in what strike us as appropriate circumstances, are we not justified in applying the adjective "frightened"?

I think most people's intuitions about cinematic robots are incoherent. They do and don't believe that they feel. Nothing hangs on such incoherent notions. Here's the real test: If the robot were real, would they feel compunctions about kicking it? (I think they would, if the robot were sufficiently like us -- just as they are with animals. Below, Doug seems to agree too.)
Here's a piece -- not much longer than this excerpt from Doug's book -- addressing this very issue. Punchline: you get out of a fictional robot whatever the author purports to put into it. If it is decreed, however incoherently, that the robot behaves just as if it feels but does not feel, then so be it. If it is decreed (as in the Spielberg movie) that it does feel, well then it does. Same for decrees that it flies, it can read minds, it can see into the future, it can change the past, it can redesign the universe, square circles, disprove Goedel's theorem -- in fiction, anything goes...
Harnad, S. (2001) Spielberg's AI: Another Cuddly No-Brainer.
Or would we need to have obtained some kind of word-usage permit in advance, granted only when the universe that forms the backdrop to the actions in question is a universe imbued with élan mental? And how is this "scientific" fact about a universe to be determined?

No word-usage permits for "feeling": In fiction, go with the flow. In the real world, your mind-reading instincts (along with common sense and the invariant correlation of feeling with organism-like doings) will be your guide, whether you like it or not. (And, of course, you can't be 100% sure in any case but your own.)
"Science" has nothing to do with it -- except maybe if you're wondering about someone in a coma...
And feeling itself is the élan mental -- the trouble is, we don't know how and why it happens (and, by my lights, we never will, because of limits on the power of causal explanation in any but a counterfactual psychokinetic universe, where feeling really is a causal "force" -- but that's not our universe).
If viewers of a space-adventure movie were "scientifically" informed at the movie's start that the saga to follow takes place in a universe completely unlike ours -- namely, in a universe without a drop of élan mental -- would they then watch with utter indifference as some cute-looking robot, rather like R2-D2 or C-3PO (take your pick), got hacked into little tiny pieces by a larger robot?

Of course not: Fiction can dictate our premises, but not our conclusions...
Would parents tell their sobbing children, "Hush now, don't you bawl! That silly robot wasn't alive! The makers of the movie told us at the start that the universe where it lived doesn't have creatures with feelings! Not one!" What's the difference between being alive and living? And more importantly, what merits being sobbed over?

You're asking moral questions, and you're right to. It is only the existence of feeling that makes morality matter at all. And of course we alas have many psychopathic tendencies, not to mention sadistic ones. I don't know if it's parents or experiences or genes that cause some people to be indifferent to or even to enjoy pain in others, but it happens.
But none of this affective evocativeness changes the basic facts: Whether or not an entity feels is all-or-none.
And all mental states (including believing) are felt states: that's what makes them "mental." Otherwise they'd just be states, tout court, as in a ferris wheel or a float-ball in a flush toilet...
Monday, January 3. 2011
Commentary on: Savage-Rumbaugh, S. (2011) Human Language — Human Consciousness. On the Human, January 2011.

There is much to agree with in Sue Savage-Rumbaugh's reflections on human and nonhuman primates. Sue has probably spent more real time rearing and observing our closest hominoid cousins than any other human being has done. Bonobos are indeed astonishingly intelligent and capable, and become still more human-like when reared in daily contact with humans.
But there is one radical inference Sue makes that it will be hard for most people to agree with: Bonobos have acquired a ("kind of") language: "the kind of language they have acquired — even if they have not manifested all major components yet — is human language as you and I speak it and know it."
Let us reflect for a moment on languages and kinds: Humans have many kinds of languages, but there is one thing all those languages have in common: Anything you can say in one of them, you can say in any of the others. And anything and everything that can be said at all can be said in any one of them. Not necessarily in the same number of words (and you might have to define a few new ones); not necessarily equally elegantly; but anything and everything.
(Some readers may find the foregoing assertion as hard to agree with as Sue's that bonobos have language. I suggest they test their intuitions by finding a counter-example: either a human language in which you can say this, but not that; or something you cannot say in any human language. Until someone comes up with such a counter-example, I will provisionally take it to be a true property of language -- not human language, but language itself -- that if you have it, you can say anything and everything that can be said [or gestured or written, as the modality need not, of course, be vocal], and if not, not.)
Neither Kanzi nor his kin or kind can say everything (or anything faintly near everything). I accordingly conclude that they cannot say anything. They can do a lot -- far more than anyone ever imagined nonhuman primates could do. And what they can do includes an astonishing amount of intelligent, purposive communication with humans, using some of the same components to communicate that humans use for language: They can communicate purposively by sending and receiving computer images as well as by responding to human spoken sounds. But the undeniable fact is that -- no matter how much linguistic understanding we attribute to them -- they cannot enter into this "conversation" we are having in this Forum, not even into a rudimentary approximation to it, whereas any speaking human being, using any (spoken or gestural or written) language, can; even a child.
And the most likely reason for that is that bonobos cannot express or understand propositions as propositions (statements with a truth-value: true or false), otherwise they could express and understand any and every proposition; and what they do understand and express when we think they are understanding propositions is not what we think it is. The "narrative" gloss that we project on it is more like the sound-track of a silent movie -- one generated by our own language-prepared brains, irresistibly "narratizing" (as Julian Jaynes dubbed it) every scene we see, but especially every communicative interaction with another mind (and sometimes even, frustrated, with malfunctioning machines). We are inadvertently projecting propositionality even where it is absent.
(This is not merely about "aboutness" in the sense Sue intends it -- not just about the intended object or "referent" of attention, shared attention, pointing, gesturing, or miming; it is about making and meaning subject/predicate assertions with truth values. For that is what gives language its unbounded expressive power, allowing us to express any and every proposition. Nor does that have anything to do with "consciousness," i.e., feelings, which bonobos, and of course most -- probably all -- animals have; nor with the "self/other" distinction, which many species can make, to varying degrees, in the practical, sensorimotor sense, but none but ourselves can make in the linguistic sense.)
It is hard to understand why creatures as stunningly intelligent and capable as bonobos cannot acquire language. I'd say that that inability was a more remarkable and puzzling fact -- begging to be understood and explained -- than even the remarkable intellectual and communicative feats that bonobos have indeed proved capable of mastering; for of course it is precisely how very much they can do that makes what they can't do all the more perplexing: Why can't they say anything and everything, given what they can demonstrably do, if it's really language?
Sue's reply is: "cultural differences"; and with Teco she's hoping to close the cultural gap. But with any human child, the gap is closed almost immediately, in infancy, once the child acquires (any) natural language. (Some unnatural languages can be designed that defy the child's language-learning capacities, but that's another matter; even those artificial languages still have the full expressive power of any natural language.)
So until Teco can join this conversation, I will assume that what is going on is a good deal of hopeful, irresistible propositional over-interpretation (by humans) of some remarkable cognitive and communicative capabilities and performance (by bonobos) -- but not a conversation, not propositions, and hence not language.
Harnad, S, (2010) From Sensorimotor Categories and Pantomime to Grounded Symbols and Propositions. In: Handbook of Language Evolution, Oxford University Press.
____. (2010) Symbol Grounding and the Origin of Language: From Show to Tell. In: Origins of Language. Cognitive Sciences Institute. Université du Québec à Montréal, June 2010.
Sunday, December 9. 2007
Please don't be frightened off by the symbols; they are made fearsome for a purpose: Suppose we have a hundred things. These can all be physical objects, or words, or speech sounds. Now suppose we sort them into (say) two categories, A and non-A, based on three, two-valued (+/-) properties, X, Y, Z. The properties could be natural ones (+/-solid, +/-edible, +/-pronounceable) or social (+/-kosher, +/-english, +/-posh). Each thing can be described by its value on the three properties (e.g., +-+). Let's say that to be in category A you need to have a + on property X, otherwise you are in category non-A.
Now I hope that this exercise has left you a little lost in a bunch of meaningless formal symbols. So even if you followed well enough to be able to tell me that a thing that was -++ would be a non-A, you would still have little idea of what the things, or the properties or the categories were. This would be true even if you spent years categorizing examples I fed you, in the form of "Would a -+- be an A or a non-A?"
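To make the point concrete, the whole exercise fits in a few lines (an illustrative sketch; the rule "to be in category A you need a + on property X" is from the text above, while the function name is my own). The program answers every membership question correctly, yet it manipulates nothing but arbitrary symbols -- it no more knows what an A or an X is than you do:

```python
def category(thing):
    """Classify a thing from its +/- values on properties X, Y, Z
    (e.g. "-+-"): to be in category A you need a + on property X."""
    x, y, z = thing          # unpack the three property values
    return "A" if x == "+" else "non-A"

category("-++")   # "non-A", as in the text
category("-+-")   # "non-A"
category("+-+")   # "A"
```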
This is an example of the arbitrariness of symbols (which is what A/non-A, X/Y/Z and +/- are here). Words too are symbols. Whether a mushroom is edible or not is not a symbol, but its name "A" and the names of its properties are. Saussure is best known for stressing the arbitrariness of symbols, but apparently that was already well known from Scottish sources before his time. Saussure also had synesthesia, which means, for example, that for him vowels had a smell, and this helped him see (or feel or taste) associations between words and objects that most of us do not see. He perhaps thought that such associations somehow provided a bridge between the arbitrary shape of symbols and the natural shape of the things that symbols signify.
But Saussure's main contribution, which he derived from his English lineage (via Mill from Hamilton) was the view that (what we would today call) cognition is "differential": it is somehow based upon encoding differences in terms of the kinds of +/- properties illustrated above. This led to structuralism. We don't see things as absolutes. We see them in terms of a network of formal contrasts. An A is an A because it is +X. The "representation" of a thing then becomes the set of +/- values on its properties.
This is all fine as far as it goes, but there is a problem: Although my behaviour could very well be described as categorizing when I used the rule "All and only A's are +X" to reply to questions like "Would a -+- be an A or a non-A?", I could do that task till doomsday without ever knowing what an A or an X was, and with no way to recognize one if I saw it. This is called the "symbol grounding problem." Today, cognitive science tends more toward computationalism than structuralism, but both approaches are insufficient to explain cognition, and for much the same reason: arbitrary symbols -- whether part of a structural diagram, or a computational algorithm, or, for that matter, an English sentence -- are merely (as the philosopher John Searle calls them) "squiggles and squoggles." Their connections with the things they signify are parasitic on the meanings in our heads, and what we have in our heads is definitely not just more squiggles and squoggles.
To ground symbols, to put concrete flesh on their arbitrary bones, be they ever so systematically structured, the symbol system first has to have the direct sensorimotor capacity to categorize the physical objects that its symbols signify -- not merely after something has magically reduced them to a symbolic description. And the "shape" of sensorimotor capacity (like the shape of objects themselves) is not symbolic or arbitrary: it is analog and dynamic. This is not synesthesis, but esthesis, and it requires a mechanism for learning and identifying categories that a symbol system alone will always lack.
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 2.5 License.