Tuesday, April 11. 2017
Singer, Peter (ed.) (2017) Does Anything Really Matter? Essays on Parfit on Objectivity. Oxford: Oxford University Press.
“Mattering” is neither a logical nor a scientific matter. In a “zombie” universe — one that is like ours, but in which living organisms do not feel — nothing would matter. What happens to zombies does not matter. (Why would it matter? to whom? to what?)
So it is because organisms feel that things matter in the universe. Let’s simplify what they feel: pleasure and pain.
Suppose there could be a one-sided “pleasure universe”: Organisms do feel, but they only feel pleasure, nothing negative. There can be more pleasure and less pleasure, but feeling less pleasure would not feel “worse.”
In a pleasure-only universe, nothing would matter either. Less versus more pleasure would not matter. It would just be a fact, the way everything in the zombie universe is just a fact, and the way everything that has no effect (now or in future) on feeling organisms in our actual universe is just a fact.
It follows that it is only pain that matters, and the only ethical principle is to minimize pain.
But a complication of “negative utilitarianism” is conflict of interest in things that matter, hence in pain: A benign despot could do the utilitarian calculus and decide mechanically what must live and die in order to minimize overall pain. But individual (sentient) organisms in our world (and perhaps any viable world) -- and especially social, altricial mammals -- are designed so that their own needs, and the needs of those close to them, usually matter more to them than the overall utilitarian calculus. (There are exceptions.)
Ethics is about that conflict of interest in matters that matter to feeling organisms. A world with only one feeling organism would be a simpler matter.
Harnad, Stevan (2016) My orgasms cannot be traded off against others’ agony. Animal Sentience 7(18)
Saturday, January 7. 2017
On a long walk in Princeton many years ago I asked David Lewis whether the distinction between what’s necessary and what’s contingent might be just an epistemic one (based only on what we do and don’t, can and can’t know), rather than an ontic one: The things we regard as necessary are the ones that are either provably necessary, on pain of formal contradiction with our premises, such as the fact that 29 is prime or that p implies “p or q,” or are thought to be “nomologically necessary,” based on current causal theory and evidence, such as that apples fall earthward rather than skyward because of gravity. The things we regard as contingent are just the ones that are not provably necessary, nor thought to be nomologically necessary.
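Both sorts of formal necessity alluded to here can in fact be checked mechanically; a minimal sketch in Lean 4 (assuming Mathlib is available for the primality lemma):

```lean
import Mathlib.Data.Nat.Prime.Basic

-- 29 is prime: a decidable proposition, so the kernel can verify it outright.
example : Nat.Prime 29 := by decide

-- p entails "p or q": a one-step proof term; denying it yields a contradiction.
example (p q : Prop) (hp : p) : p ∨ q := Or.inl hp
```

Nomological necessity, of course, admits no such mechanical check; that asymmetry is part of the point.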
In other words, the necessary/contingent distinction could be metaphysical, but it could also be that everything that is and that happens is necessary (could not have been otherwise), either formally or nomologically, but we just don’t always know the proof, or the laws/evidence/reasons. Contingency and possibility are just symptoms of our ignorance.
The idea has its homologue in the metatheory of probability: What look like possibilities only look that way because of our ignorance. Everything is determinate and necessary; just some of it (unproved and unprovable theorems, the answers to NP-complete questions, many-body problems, even quantum indeterminacy) is uncertain, unpredictable, its formal or causal story unknown or even unknowable. (No, I don’t think QM’s hidden necessity would be committed to the truth of hidden-variable theory.)
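The point that apparent contingency can be sheer ignorance of a generating rule has a humble computational analogue (a deterministic pseudorandom generator; the parameters are standard textbook values, and the whole thing is purely illustrative):

```python
# To an observer who knows the seed and the rule, every value in this
# sequence is "necessary"; to one who doesn't, it is merely unpredictable.
def lcg(seed, n, a=1664525, c=1013904223, m=2**32):
    """Linear congruential generator: x_{k+1} = (a*x_k + c) mod m."""
    xs = []
    x = seed
    for _ in range(n):
        x = (a * x + c) % m
        xs.append(x)
    return xs

# The same seed always yields the same "contingent-looking" sequence.
print(lcg(42, 3))
```

Nothing here is probabilistic; the "possibilities" live entirely in the observer's ignorance of the rule.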
What would become of the realist view of necessity if everything were necessary? (Those are, of course, epistemic “woulds” and “weres”.)
This would not solve the "hard" problem of consciousness either because it’s not enough to say that our brains must produce consciousness: We still want to know, as with everything else, how and why. The hard problem is an epistemic one, of causal explanation.
And of course there’s a lot more at stake in asking whether the laws of nature themselves could have been otherwise than in pondering whether or not the various incarnations of the Ship of Theseus are the same ship.
Formalists in mathematics would then be pragmatists, in John Burgess's sense, but the law of non-contradiction would be the underlying realist constraint.
Non-ontic contingency would of course have implications for "possible worlds" theory, "concepts," and "free will."
"Uncomplemented Categories" (for which non-members do not exist) are admittedly problematic.
Sunday, April 24. 2016
I do not share Einstein’s view (if that was indeed his view, rather than just a verbal attempt to console someone for bereavement) that "time is an illusion."
I am of course not speaking of physicists’ “real” (objective) time but of our sense of time (a subjective state) — which is of course also related to our sense of bereavement (also a subjective state).
The same observation, but this time made about bereavement, points out the absurdity of calling it an illusion: “The ‘loss' of a loved one is an illusion, because we know from the conservation laws that matter is neither created nor destroyed.” That’s rather like saying “Your pain is an illusion because it is really just the jangling of C-fibres.”
In fact, we just have to go back to what Descartes pointed out to the skeptics with his Cogito: “You can be skeptical (uncertain) about the truth of anything (other than the formal mathematical truths that are proven true on pain of contradiction), even the regularities (laws) of science, the minds of other creatures, the existence of the palpable world, etc. That could all be mistaken. But you cannot be skeptical about the fact that whatever you are feeling while you are feeling it is indeed being felt. (In particular, whilst you are feeling that time is passing, or that a loved one has passed away, it is absurd to say that that feeling is an illusion.)”
An illusion is a felt state. It may be wrong about the world — and in that sense an objective error, rather than an “illusion". But it cannot be an illusion that that feeling is indeed being felt (now). That’s the gist of the Cogito.
And after all, is it not feeling rather than objective truth that matters to us? Isn’t that what “mattering” even means? (Someone may reply: “I am a scientist. The only thing that matters to me is objective truth.” That may be (partly) true of that scientist, as a matter of taste. But that too is just a feeling. And even determined scientists — and mathematicians — have other feelings too, feelings that can get the better of them just as they can with everyone else. And, as you point out, even the objective truths of science have to make themselves palpably — i.e. empirically — felt so that we can come to know them.)
I might add, by way of reply to Steven Weinberg or any other scientistic wag inclined to overstate his tastes: Anyone who tries to draw the conclusion that agony is a farce is speaking nonsense as surely as if he were saying “P is not P.” (And, yes, this “insight” has to be based on negative feelings; it does not have the same force when stated as “orgasm is a farce,” which is rather closer to the truth…)
Perhaps a milder way of saying what I’ve just said is that scientists are not really being serious when they discount feelings as illusions, even though they feel they are being their most serious when they are doing so. Even nonsense can feel serious (and true)...
P.S. This is not a defence of "analytic metaphysics."
The exigencies (and nuances) of certitude (as opposed to mere truth)
"All this talk about time and subjectivity etc being an illusion is patent bullshit. If I am an illusion, then whose illusion am I? And if time is an illusion, then why am I getting unpleasantly older?"I’d actually say that the (cartesian) “I” in all this is not at all as indubitable as the feeling itself.
Yes, the nature of feeling is that it is felt, and that feeling seems to call for a feeler of the feeling; at least that’s what it feels like.
But we know that there are problems with the notion of continuity of personal (or, for that matter, any) identity; and that the only infallible thing about feeling is what it feels-like right now (not an instant later).
So both time and I are moot. The only sure thing is that THIS feeling is being felt right NOW (and perhaps that what it feels like is that it is being felt by a persistent me)…
Friday, July 24. 2015
In “What if Current Foundations of Mathematics are Inconsistent?” Voevodsky (2010) suggests that there are three options in light of Gödel’s theorems:
Either: […]

But why make any mention of psychological states like “knowing” at all?
Surely, regardless of our intuitions, the only truths (besides the Cogito) that we can “know” to be true, i.e., certain (rather than just probably true on all available evidence), are the truths that we have proved to be necessarily true, on pain of contradiction.
Why not the following?—
4. Admit that arithmetic’s consistency is provably unprovable, but that then it may either be (unprovably) true (rather than unprovably “known”) that arithmetic is consistent — or it may be false that arithmetic is consistent.

“Reliability” does not seem to be a valid substitute for provability-on-pain-of-contradiction. It would make mathematics into something more like inductive empirical science: provisionally true on the available evidence until/unless contradictory evidence is encountered. That is just the conjunction of 5 and 6. It also has some of the flavor of intuitionistic reasoning (insofar as the excluded middle is concerned).
As usual, this uncertainty only besets infinities, not finite constructions.
Or does the notion of “deductive rigor” all reside in the provability of consistency in nonfinite mathematics?
(The problem of possible mistakes in proofs (and the partial solution of computer-aided proofs) concerns another kind of reliability, and again seems to be a solution only for finite mathematics.)
Sunday, August 3. 2014
This is an interview on Radio-Canada's program Sphère about artificial intelligence. Animals come into it, but only toward the end and in very compressed form, because Sphère is interested in computers, not animals...

A machine is a causal mechanism. All organisms are therefore machines, but not all organisms are sentient [conscious] machines. Plants, for example, are not (I hope) sentient. Nor are unicellular organisms. Mammals are sentient, all vertebrates are, and so are invertebrates -- all the species that have a nervous system (including nociception: the sense of pain).

[That is why the concept of "speciesism" is incoherent as an argument against carnivorism in humans: plants are species of living being too, so if the reason not to eat meat were that doing so is "speciesist," then vegans would be speciesists too. -- So no: the reason not to eat animals is that we should not eat sentient beings (because doing so causes them needless suffering) -- and that is why it is so important to grant them the status of sentient beings before the law, as demanded by the Manifesto lesanimauxnesontpasdeschoses.]

"Intelligent" is just praise, a compliment, an adjective. "She's intelligent; he's not." "That wasn't very intelligent." There is no science of such adjectives. What we study in cognitive science is cognition, which is to say, thinking. Humans are thinking beings. That is Descartes's "Cogito." But what is it to think? We all know it is something that goes on in our heads, in our brains, but what is it?

It is cognitive science that seeks to discover what thinking is, hence what it is to be a thinking machine. The mathematician Alan Turing proposed 65 years ago that the way to discover and explain what thinking is (hence what cognition is) is to build a machine capable of doing everything a thinking (human) being can do, hence everything we can do. Once we have built a machine that has the capacity to do everything we can do (move about in the world, recognize objects, learn, speak, etc., exactly as we do), and that can do all this so well that it can no longer be distinguished from one of us -- for a whole lifetime, if need be -- then the workings of that machine's internal mechanism, which generates the capacity to do everything a human can do, will be cognition: thinking. The candidate will have passed the "Turing Test," and cognitive science will have explained cognition.

But where does sentience fit into all this know-how? That is why, midway through my cognitive science course, I always pick one of the students (say, Alex), someone everyone knows very well, and ask the others: "And if we learned right now that Alex had been built three years ago at MIT, would you feel comfortable giving him a kick?"

Almost everyone answers: No. It would be immoral to kick him. And that is because thinking is a mental state, a felt state: it feels like something to be in that state, and we all know what it feels like. So if a being passes the Turing Test, we all know -- psychopaths excepted -- that we have neither more nor less reason to conclude that the being is sentient, as we are, than we have with the members of our own species: that is what it means not to be able to tell them apart. And that is the substance of the Turing Test.

Well, neither severely mentally handicapped adults, nor babies, nor animals have the capacity to pass the Turing Test, yet we all know they are sentient. So it is surely just as immoral to kick them as to kick Alex. If plants were sentient too, we would have no choice. But fortunately it is almost certain that they are not sentient. And so we have a choice…
Tuesday, February 11. 2014
On Plantinga on "Is Atheism Rational?"
What a godawful congeries of sophisms — and such feeble ones it’s hardly worth the effort to state the obvious….
Running through it all is the same howler that wobbled Pascal’s Wager: the Judeo-Christian voodoo is just one of a whole motley of competing screeds on offer on this "fine-tuned" planet, all equally arbitrary and absurd, all equally at odds with all evidence and reason — and all in contradiction with one another. Yet Plantinga’s pietist putty is applicable to any of them!
It’s already sophistical to cast it as "atheism vs theism": There are a lot more voodoos on offer than just Plantinga's preferred one, including the Dawkins/Russell one-eyed, one-horned flying purple people-eater.
So it’s not "A vs. not-A" (50/50): it’s V1 vs V2 vs V3…. vs. Vn... vs. ordinary reality. And Plantinga suggests that "agnosticism" is a more rational stance than to chuck the whole vat of V’s? Then I need to be agnostic about every bit of supernatatural delusion that any raving madman ever dreams up!
Only the reveries that are backed up by transcendental experience of personal union with the “divine"? Which one(s)? Every mescal-button hallucination anyone has ever had? And that’s supposed to substitute for sense and evidence?
(This time the relevant quip is not Russell’s orbiting teapot but the one about W. James’s mate who knew the secret of the universe whenever he sniffed nitrous oxide -- and ’twas: “Higamus Hogamus Men are Polygamous…”)
And I find that sociopathic Christian scat — that can serenely survey the planet’s Jobian panorama and squeeze out of that squalor the most “perfect world” with the help of some of the sappiest of eschatological claptrap — to be the most offensive of all. At least the karmic creeds are not so sanctimonious…
Bref: The shenanigans going on here are worthy of an OJ Simpson Dream-Team Defence summary…
Monday, February 18. 2013
On 2013-02-18, at 9:09 AM, Consciousness Online [Richard Brown] wrote:
COUNTING THE WRONG CONSCIOUSNESS OUT
[Commentary on Dan Dennett on "On a Phenomenal Confusion about Access and Consciousness"]
Yes, there was a phenomenal confusion in doubling our mind-body-problems by doubling our consciousnesses.
No, organisms don't have both an "access consciousness" and a "phenomenal consciousness."
Organisms' brains (like robots' brains) have access to information (data).
Access to data can be unconscious (in organisms and robots) or conscious (in organisms, sometimes, but probably not at all in robots, so far).
And organisms feel. Feeling can only be conscious, because feeling is consciousness.
So the confusion is in overlooking the fact that there can be either felt access (conscious) or unfelt access (unconscious).
The mind-body problem is of course the problem of explaining how and why all access is not just unfelt access. After all, the Darwinian job is just to do what needs to be done, not to bask in phenomenology.
Hence it is not a solution to say that all access is unfelt access and that feeling -- or the idea that organisms feel -- is just some sort of a confusion, illusion, or action!
If, instead, feeling has or is some sort of function, let's hear what it is!
(Back to the [one, single, familiar] mind/body problem -- lately, fashionably, called the "hard" one.)
To comment further, please go to Philpapers.
Organisms with nervous systems don't just do what needs to be done in order to survive and reproduce. They also feel. That includes all vertebrates and probably all invertebrates too. (As a vegan, I profoundly hope that plants don't feel!)
There's no way to know for sure (or to "prove") that anyone else but me feels. But let's agree that for vertebrates it's highly likely and for computers and today's robots (and for teapots and cumquats) it's highly unlikely.
Do we all know what we mean when we say organisms feel? I think we do. I have no way to argue against someone who says he has no idea what it means to feel -- meaning feel anything at all -- and the usual solution (a pinch) is no solution if one is bent on denying.*
You can say “I can sorta feel that the temperature may be rising” or “I can sorta feel that this surface may be slightly curved.” But it makes no sense to say that organisms just “sorta feel” simpliciter (or no more sense than saying that someone is sorta pregnant):
The feeling may feel like anything; it may be veridical (if the temperature is indeed rising or the surface is indeed curved) or it may be illusory. It may feel strong or weak, continuous or intermittent, it may feel like this or it may feel like that. But either something is being felt or not. I think we all know exactly what we are talking about here. And it's not about proving whether (or when or where or what) another organism feels: it's about our 1st-hand sense of what it feels like to feel -- anything at all. No sorta's about it.
The hard problem is not about proving whether or not an organism or artifact is feeling. We know (well enough) that organisms feel. The hard problem is explaining how and why organisms feel, rather than just do, unfeelingly. (Because, no, introspection certainly does not tell us that feeling is whatever we are doing when we feel! I do fully believe that my brain somehow causes feeling: I just want to know how and why: How and why is causing unfelt doing not enough? No "rathering" in that!)
After all, on the face of it, doing is all the Blind Watchmaker really needs, in order to get the adaptive job done (and He's no more able to prove that organisms feel than any of the rest of us is).
The only mystery is hence how and why organisms feel, rather than just do. Because doing-power seems like the only thing organisms need in order to get by in this Darwinian world. And although I no more believe in the possibility of Zombies than I do in the possibility of their passing the Turing Test, I certainly admit frankly that I haven't the faintest idea how or why there cannot be Zombies. (Do you really think, Dan, that that's on a par with the claim that one hasn't the faintest idea what "feelings" are?)
*My suspicion is that the strategy of feigning ignorance about what is meant by the word "feeling" is like feigning ignorance about any and every predicate: Whenever someone asks what "X" means, I can claim I don't know. And then when they try to define "X" for me in terms of other predicates, I can claim I don't know what those mean either; all the way down. That's the "symbol grounding problem," and the solution is direct sensorimotor grounding of at least some of the bottom predicates, so the rest can be reached by recombining the grounded ones into propositions to define and ground the ungrounded ones. That way, my doings would contradict my verbal denial of knowing the meanings of the predicates. But of course sensing need not be felt sensing: it could just be detecting and responding, which is again just doing. So just as a toy robot today could go through the motions of detecting and responding to "red" and even say "I know what it feels like to see red" without feeling a thing, just doing, so, in principle, might a Turing-Test-Passing Cog just be going through the motions. This either shows (as I think it does) that sensorimotor grounding is not the same as meaning, or, if it doesn't show that, then someone still owes me an explanation of how and why not. And this, despite the fact that I too happen to believe that nothing could pass the Turing Test without feeling or meaning. It's just that I insist on being quite candid that I have no idea of how or why this is true, if, as I unreservedly believe, it is indeed true. It's an ill-justified true belief. Justifying it is the hard problem.
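That grounding-by-recombination story can be given a toy computational sketch (all names here are hypothetical illustrations, not anyone's actual model): a few "bottom" predicates are grounded directly in (simulated) sensorimotor detectors, and a new predicate inherits its grounding from a propositional combination of already-grounded ones.

```python
# Toy sketch of symbol grounding by recombination (hypothetical names).
# "Bottom" predicates are grounded directly in simulated detectors;
# new predicates are grounded indirectly, by boolean combination.

grounded = {
    "striped": lambda obj: obj.get("stripes", False),
    "horselike": lambda obj: obj.get("shape") == "horse",
}

def define(name, combinator):
    """Ground a new predicate as a composition of already-grounded ones."""
    grounded[name] = combinator

# "zebra" is never sensed directly; its grounding is inherited from its
# definition as a proposition over grounded predicates: striped AND horselike.
define("zebra", lambda obj: grounded["striped"](obj) and grounded["horselike"](obj))

print(grounded["zebra"]({"stripes": True, "shape": "horse"}))  # True
```

Of course, on the paragraph's own argument, such a detector could "go through the motions" without feeling or meaning a thing; whether the composed predicate thereby means zebra, rather than merely detecting one, is exactly the question left open.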
@Richard Brown: "felt representing (i.e. consciousness) occurs when one represents oneself as being in some other representation in a way that seems subjectively unmediated... There is no equivocation here; the claim is that feeling (i.e. consciousness) consists in a certain kind of cognitive access. What’s the argument against this view? That there can be these kinds of representations without feeling? That is called begging the question."The argument against this claim is that it is an ad hoc posit: an attempt to solve a substantive problem by definition.
My critique is on-topic (access vs. feeling), the matter is far from settled, and neither your comments nor mine prevent Dan or anyone else from responding.
Tuesday, December 25. 2012
All agree that speculations, even if they come from mathematics that seems to make sense, still need evidence in order to be believed. And a lot of the speculations about multiple "universes" seem to be beyond observational evidence, at least for now.
But it seems to me that some of the puzzlement comes from calling these hypothetical entities multiple "universes," of which "ours" is also a "universe."
What is a universe? If there can be multiple galaxies then why can't there be multiple entities that are bigger than galaxies and include galaxies? Let's call them "sub-universes," and let's say that (hypothetically) they may resemble one another in various ways, but be "out of touch" (out of observational reach) of one another. That makes them more like some of the unobservable microcomponents (like strings and unbound quarks) that are much less far-fetched than the notion of there being more than one "universe."
(That said, I think the multi-sub-universe consisting of all the possible histories since the Big-Bang is too far-fetched to take seriously no matter what we call it. -- I also think the notion of multi-sub-universes does not really give us any insight into either the probability or the "inevitability" of life.)
Friday, February 24. 2012
Bernie Baars: "Stevan, I think that may be the key to our disagreement. The evidence (and scientific consensus) regarding unconscious knowledge is simply overwhelming."It may well be (part of) the key to our disagreement, but not at all because I question the evidence concerning unconscious "knowledge"!
Unconscious knowledge is the unconscious possession of information (data, capacity, propensity). I have no problem at all with unconscious information, nor with any unconscious function.
My problem (the "hard" problem) is with conscious function, including conscious information (data, capacity, propensity).
If all "knowledge" were unconscious, there would be no hard problem, and we would not be discussing consciousness here (just perhaps the "easy" functional matter of voluntary versus involuntary behavior and accessible versus inaccessible internal information).
And it is precisely for that reason that I keep harping on the fact that it is only because we allow ourselves to keep invoking weasel-words for consciousness ("awareness, subjectivity, intentionality, mentality, 1st-personality, qualia," etc. etc.) -- which are really just vague and hopeful synonyms -- that we keep fooling ourselves that we are making some headway on the hard one.
To keep ourselves honest and grounded, we should ditch all the other locutions and stand-ins for "conscious" and just resort to "felt" vs. "unfelt": That would make the question-begging (and even the incoherence) transparent whenever we inadvertently fall into it.
And the question-begging and incoherence here was precisely the notion of an "unconscious headache" -- which, when stated transparently, without equivocation, would be an "unfelt ache," which amounts to an "unfelt feeling": a contradiction in terms (like an uncurved curve or a colorless color).
Feeling (not "intentionality") is the "mark of the mental." What is not felt is not conscious. And the hard problem is to explain how and why anything at all is felt (hence mental), anywhere, ever.
Information accessibility is not what it's about. There would be accessible as well as inaccessible information inside an insentient (= unconscious) robot (as well as inside a hypothetical "zombie," for those who are fond of those sci-fi fantasies of speculative metaphysicians).
Bernie Baars: "Autobiographical memories are unconscious (until recalled)."And the problem is not with the fact that the stored information is there, nor the fact that it is used and plays a causal role in adaptive function, nor even with the fact that it can be made explicit and verbalized. The problem is with the fact that recall is conscious recall -- i.e., felt recall -- rather than just recall!
Bernie Baars: "So are unaccessed ambiguities in language, vision, and other functions."Right. And the problem is not with access, but with conscious (felt) access.
Bernie Baars: "The cerebellum is unconscious; so are basal ganglia functions."Indeed. And the problem is not with cerebellar and basal ganglion functions, but with conscious (felt) functions.
Bernie Baars: "The corticothalamic system (under the proper conditions) is not."Translation: Corticothalamic functions (some, sometimes) are felt rather than unfelt.
The Problem: How and Why?
(Otherwise, all you have is an unexplained correlation, not a causal explanation of how and why some functions are felt functions.)
Bernie Baars: "Habituated input is unconscious. Automatisms are unconscious. Implicit motivation, implicit learning, incubation, preconscious perception, long-term ego functions, and yes, demonstrated cases of suppressed thoughts are unconscious."All just fine. And no problem.
And if all functions were like that (unfelt) there would be no problem at all.
But they're not.
And that's the (hard) problem.
Bernie Baars: "The evidence is simply enormous. You can be a radical subjectivist on those matters, but you will be in a small and diminishing minority. And what’s worse, you lose a ton of explanatory power."I have no idea what a "radical subjectivist" is!
I am just pointing out (each time) that it is indeed a problem to explain how and why all functions are not unfelt: to explain how and why we are not zombies, if you like. (We certainly aren't: how and why not? What's the functional advantage? What's the causal difference?)
The absence of an answer (or the failure even to face the problem) is the absence of explanatory power.
Bernie Baars: "I think this may be the key to our mutual incomprehension. (Decontextualized comprehension is also unconscious)."I agree that there is indeed misunderstanding here, but I am not sure it is mutual! I think I understand completely what you are saying, Bernie, but I am not sure you are understanding -- or appreciating the implications of -- what I am saying (about the failure and indeed the vacuity of all attempts at causal explanation of consciousness).
(I have no idea what "decontextualized comprehension" means, but the problem, as usual, is conscious [i.e., felt] comprehension, not comprehension simpliciter, which is simply the possession of information and the capacity to act accordingly -- including, if necessary, to verbalize it!)
Harnad, S. (1992) There is only one mind body problem. International Journal of Psychology 27(3-4) p. 521
Thursday, June 23. 2011
"Wouldn't it short-circuit all these discussions if you just came out and said that this is how you use the word "Feeling", that is, to mean any conscious notion or awareness whatever, even if it is not a sensation like taste or pain or fear? You say "feeling" is a nice honest word, while words like "awareness" and "conscious" are weasel words. But since a lot of us cannot agree that wondering idly whether it will rain next Tuesday is a feeling, then when you say it is because it just has to be, good old honest-yeoman uncorrupt "feeling" slips into weaseldom, or at least mush, just as all the other words do.Very good challenge, and I'm happy to try to rise to the occasion!
The brain not only can but does "deliver information" without its being felt. Not only delivers information, but gets things done.
It does nocturnal deliveries while we're asleep, of course, but it also does a lot while we're awake (keeps my heart beating, keeps me upright, and, most important, delivers answers to my (felt) questions served on a platter ("what was that person's name?", "where am I going?", "what word should I say next?") without me feeling any of the work that went into it.
These are things we do, and feel we do ("find" the name, "recall" where I'm going, "decide" what to say next), but we are clueless about their provenance: We have no idea how we do them. Our brain does them, and then "delivers" the result.
Some of this delivery is delivery of know-how (riding a bike, speaking) and some of it is of know-that (facts, or putative facts).
We are the "recipients" of the delivery, and the question is, how does our brain do it?
But these are the "easy" questions: Cognitive neuroscience will eventually tell us how our brain does and "delivers" all these things for us.
But that's not the hard part. The hard part is explaining why and how it feels like something to be the "recipient" of these "deliveries." If the result of the deliveries were merely doings and sayings, there would be no issue, because there would be nothing mental; it would all just be mechanical, neurosomatic dynamics.
Now, you are sort of forcing me to do some phenomenology here -- something I'm neither particularly good at, nor set great store by, but here goes:
Am I just linguistically legislating that having received a "delivery," [say, the "information," X, that it's Tuesday today] from their brain, what people mean by "I am aware of X" has to be "It feels as if X is the case"?
Or, worse, am I presumptuously denying what is not only other people's private privilege but (by my own lights) certain and incorrigible, when I say that people are wrong when they insist it doesn't feel like anything to know it's Tuesday? Wrong to just settle for saying they just know it, it's one of those pieces of "information delivered" by their brain, and that's all there is to it?
That would be fine, it seems to me, if the "delivery" were taking place while you were asleep or anesthetized or comatose.
But it seems to me (and here I am doing some amateur phenomenology) that the difference between being (dreamlessly) asleep and being awake is that it feels like something to be awake and it does not feel like anything to be dreamlessly asleep.
"Information" "delivered" and even "executed" by my brain while I am asleep is also being served on a platter, just as it's served on a platter when I'm awake: I'm just not feeling anything the while.
So far you will say you could have substituted "not aware of (a 'delivery')" for "not feeling (a 'delivery')" and covered the same territory without being committed to its having to feel like something to be aware of something.
But I can only ask, what does it mean to be awake and aware of something if it does not feel like something to be awake and aware of something?
If you reply "It feels like something to be aware of something, but only in the sense that it feels like something while I'm being aware of something, because I happen to be awake, and being awake feels like something" -- then I will have to reply that you are losing me, when you say that it feels like something while you receive the "delivery" but that that something it feels like is not what it feels like to receive the delivery!
Yes, our language about this is getting somewhat complicated, so let me remind you that, yes, our difference could be merely terminological here, for much the same reason that (if I remember correctly) you had objected, years ago, to my insistence that seeing, too, is feeling.
I think you said that feeling tired is feeling, or feeling anger is feeling, and even feeling a rough surface is feeling, but seeing red is not feeling, it's seeing. And the way I tried to convey what I meant by "feel" was to point out that you too would agree (and you did) that it feels like something (rather than nothing) to see red. And it feels like something different to see green, or to hear middle C or to smell a rose.
I think I even said that it was just our language -- which says I am feeling a headache or I am feeling cold or I am feeling a rough surface, yet not "I am feeling red" but rather "I am seeing red," and not "I am feeling the perfume" (if we don't mean palpating it but sniffing it) but "I am smelling the perfume" -- is fooling us a bit, when we conclude from our wording that seeing is not feeling.
I think I even mentioned French, in which both feeling and smelling are (literally): "je sens la douleur", "je sens le parfum," as is palpating ("je sens la surface"), whereas, as in English, seeing and hearing have verbs of their own.
There is in the French the residue of the Latin "sentio" -- to feel -- that still exists in English, but as a sort of ambiguous false-friend, "I sense," which means more "I intuit" or "I pick up on" than "I feel." But I would say the same thing about sensing: If I sense something, be it sensory, affective, tactual, thermal, cognitive, or intuitive, then it feels like something to be sensing it, and would feel like something else to be sensing something else, as surely as it feels like something to be seeing red and would feel like something else to see something else.
And not just because I happen to be awake while my brain "delivers" the "information"!
So if I am sensing that it's Wednesday today, then that feels like something, and feels like something different from sensing that it's Tuesday today as surely (but perhaps not as intensely) as seeing red feels different from seeing blue.
To put it another way, the result of the "delivery" is not just my "speaking in tongues." It feels like something not only to say (or think) the words "It's Wednesday today" but to mean them. And it feels like something else not only to say (or think) but to mean (or understand) something else.
Tuesday, May 3. 2011
(Reply to Antonio Chella & Riccardo Manzotti)
Antonio Chella & Riccardo Manzotti suggest that since we know that feeling exists, any explanation that cannot account for it is inadequate. They also suggest that there is a difference between functional explanation and causal explanation, illustrating the difference with examples from physics. Functional explanation may not explain feeling, but causal explanation may succeed, perhaps partly by scrapping the distinction between states that are internal and external to the brain:
CHELLA & MANZOTTI: "since the fact that we feel is an empirical[ly] undeniable fact albeit from a first-person perspective, we should argue against any view that does not predict such possibility."Except if no causal theory can explain feeling -- in which case we are better off with one that can at least explain doing than with no eplanation at all.
CHELLA & MANZOTTI: "If feeling [does] not fit into the functional description of reality, so much the worse for functionalism."So much the worse for any causal explanation. The Turing Robot is "merely" indistinguishable from is in performance capacity, but the Turing biorobot also has equivalent internal processes and states, even if synthetic ones. That's still normal causal explanation, and remains so even if the biodynamics are natural rather than synthetic.
In other words, there is no wedge to be driven between "functional" explanation and "causal" explanation: All dynamical explanations of feeling are equally ineffectual, for the same reasons: There is neither any causal room for feeling, nor is there any causal need for them.
CHELLA & MANZOTTI: "we purposefully shifted from a causal description to a functional one"But unfortunately it is a distinction that marks nothing substantive, and does not solve the "hard" problem of explaining how and why we feel.
CHELLA & MANZOTTI: "the equations for gravity and electromagnetism have the same form… The two cases are functionally identical. Yet, they are different both in causal and in physical terms since the physical properties (or powers) which are responsible for the two situations are very different (on one hand, mass and gravity and, on the other hand, electric charge and electromagnetic force)"The equations are equivalent at one level of description, but they are not a complete description. Both mass and charge are measurable, describable, predictable physical properties -- unlike feelings, which certainly exist, but do not otherwise enter into the causal matrix.
CHELLA & MANZOTTI: "What is still missing is a theory outlining a conceptual and causal connection between neural activity and phenomenal experience and functionalism does not seem to possess the resources to do it."Nor does any other causal theory.
CHELLA & MANZOTTI: "[In] Harnad’s… conception… internal and external… refer to physical events internal or external to the brain as if the brain boundaries were some kind of relevant threshold…"Yes, mental states (feelings) -- for which I recommend a migraine headache as a paradigmatic example -- occur in the head, not outside it. Both doings and their functional substrate can be distributed beyond the bounds of a head, but feelings (until further notice) cannot...
For a critique of the notion of the "extended mind," see:
Dror, I. and Harnad, S. (2009) Offloading Cognition onto Cognitive Technology. In Dror, I. and Harnad, S. (Eds) (2009): Cognition Distributed: How Cognitive Technology Extends Our Minds. Amsterdam: John Benjamins
CHELLA & MANZOTTI: "assuming that the mind is indeed internal to anything may be a misleading"It is misleading to mix up "in the head" with "in the mind." But "mind" is a weasel word. To have a mind is to feel. And there is no reason to doubt that a headache cannot be wider than a head...
Friday, April 29. 2011
In my little essay I tried to redraft the problem of consciousness -- the "mind/body problem -- as the problem of explaining how and why we feel rather than just do.
It was not meant as a terminological exercise. The usual way we talk about consciousness and mental states uses weasel-words ("conscious," mental," "experience") that are systematically ambiguous about whether we are just talking about access to data (an easy problem, already solved in principle by computation, which is simply an instance of doing) or about felt access to data (the hard part being to explain not just the doing but the feeling).
Nor was it meant as a metaphysical exercise: The problem is not one of "existence" (feeling indubitably exists) but of explanation: How? Why?
The commentaries were a fair sample, though a small one, of the issues and the kinds of views thinkers have on them today. A much fuller inventory will be presented at the 2012 Summer School on the Evolution and Function of Consciousness in Montreal June/July of next year. Think of this small series of exchanges in the On the Human Forum as an overture to that fuller opus.
I have already responded in detail individually to each of the 10 commentators (15 commentaries) so I will just summarize the gist here:
Judith Economos rightly insists, as the only one with privileged access to what's going on in her mind, that it is not true that she feels everything of which she is conscious: Some of it -- the part that is not sensory or emotional -- she simply knows, though it doesn't feel like anything to know it. I reply (predictably) that "know," too, is a weasel-word, ambiguous as between felt and unfelt access to data. So if one is awake (conscious) whilst one is knowing, one is presumably feeling something. One is also, presumably, feeling something whilst one is not-knowing something, or knowing something else. If all three of those states feel identical, how does one know the difference? For if "knowing" just refers to having data, then it is just a matter of know-how (doing), which is already explained (potentially) by computation, and has nothing to do with consciousness.
Galen Strawson seems to agree with me on the distinction, but prefers "experience" ("with qualitative character") to "feeling." Fine -- but "experience" alone is ambiguous; and trailing the phrase "with qualitative character" after it seems a bit burdensome to convey what "feel" does in one natural, intuitive, monosyllabic swoop. The substantive disagreement with Galen is about the coherence and explanatory value of "panpsychism" (i.e., the metaphysical hypothesis that feeling, or the potential to feel, is a latent and ubiquitous property of the entire universe) as a solution to the hard problem. The existence of feeling is not in doubt. But calling it a fundamental take-it-or-leave-it basic property of the universe does not explain it; it's just a metaphysical excuse for the absence of an explanation!
Shimon Edelman is more optimistic about an explanation because there are computational and dynamic ways to "mirror" every discriminable difference (JND) in a system's input in differences in its internal representations. This would certainly account for every JND a system can discriminate; but discrimination is doing: The question of how and why the doing is felt is left untouched.
David Rosenthal interprets the experimental evidence for "unconscious perception" as evidence for "unconscious feeling," but, to me, that would be the same thing as "unfelt feeling", which makes no sense. So if it's not feeling, what is unconscious "perception"? It is unconscious detection and discrimination -- in other words, internal data-doings and dispositions that are unproblematic because they are unfelt (the easy problem). If all of our know-how were like that, we'd all be Zombies and there would be no hard problem. David needs unconscious perception to be able to move on to higher-order consciousness (but that is, of course, merely higher-order access -- the easy part, until/unless feeling itself is first explained). So this seems like recourse to either a bootstrap or a skyhook.
John Campbell points out that sensorimotor grounding is not enough to explain meaning unless the sensing is felt, and I agree. But he does not explain how or why sensorimotor grounding is felt.
Anil Seth reminds us that many had thought that there was a "hard problem" with explaining life, too, and that that turned out to be wrong. So there's no reason not to expect that feeling will eventually be explained too. The trouble is that apart from the observable properties of living things ("doings") there was never anything else that vitalists could ever point to, to justify their hunch that life was inexplicable unless one posited an "elan vital." Modern molecular biology has since shown that all the observable properties of life could be explained, without remainder, after all. But in the case of feeling there is a property to point to -- observable only to the feeler, but as sure as anything can be -- that the full explanation of the observable doings leaves out and hence cannot account for. (Perhaps feeling is the property that the vitalists had in mind all along.)
The remaining commentaries seem to be based on misunderstandings:
Bernard Baars took "Turing Robot" to refer to "Turing Machine." It does not. A Turing Machine is just a formalization of computation. The internal mechanism of a Turing Robot can be computational or dynamical (i.e., any physical process at all, including neurobiological).
Krisztian Gabris thinks feelings are needed to "motivate" us to do what needs to be done. That's certainly what it feels like to us. But on the face of it, the only thing that's needed is a disposition to do what needs to be done. That's just know-how and doing, already evident in toy robots and toasters. How and why it (sometimes) feels like something to have a disposition to do something remains unexplained.
Joel Marks assumed that the Turing Robot would be an unfeeling Zombie. This is not necessarily true. (I think it would feel -- it's just that we won't be able to know whether it feels; and even if it does feel, we will be unable to explain how or why.) Hence Joel's question about whether it would be wrong to create a robot that feared death is equivocal: By definition, if it's a Zombie, it cannot fear, it can only act as if it feared. (Witnessing that may make us feel bad, but the Zombie -- if there can be Zombies -- would feel nothing at all.) And if the Turing Robot feels, it's as important to protect it from hurt as it is to protect any other feeling creature from hurt.
Wednesday, April 27. 2011
(Reply to Galen Strawson-2)
Galen Strawson does a brilliant, heroic job with panpsychism:
The only thing we know for sure -- indeed, with a Cartesian certainty that is as apodictic as the logical necessity of mathematics -- is that and what we feel.
Everything else we know (or believe we know), we likewise know "through" feeling -- in that it feels like something to learn it and it feels like something to know it.
(It feels like something to make an "empirical" observation. It feels like something to understand that something is the case. It feels like something to understand an inference or a causal explanation.)
So feeling is certain, whereas physics ("doing," in my parlance) is not certain.
But we are realists, trying to do the best we can to explain reality -- not extreme sceptics, doubting everything that is not absolutely certain, even if it's highly probable.
We are just looking for truth, not necessarily certainty.
"Experience" is a weasel-word because it can mean either feeling something -- which is highly problematic (the "hard problem) -- or it can just mean acquiring empirical data (as in: "this machine had the solution built in, that machine learned it from experience") -- which is unproblematic (doing, the "easy" problem).
So whereas it is true that the only thing we know for sure (besides the things that are necessarily true on pain of contradiction) is that feeling exists, neither everyday life nor science requires certainty. High probability on the evidence (data) will do.
And although it is true that all evidence is felt evidence, it is only the fact that it is felt that is certain. The evidence itself (doing) is only probable.
In other words, although they always accompany the data-acquisition (doing), the feelings are fallible. We feel things that are both true and untrue about the world, and the only way to test them out is via doings. It is true that the data from those doings are also felt. But the felt data are answerable to the doings, and not to the fact that they are felt.
And not only are our feelings fallible, as regards the truth: they also seem to be causally superfluous. Doings (including data-acquisition) alone are enough, for evolution, as well as for learning. Some doings are undeniably felt, but the question is: how and why?
When we are doing physics (or chemistry, or biology, or engineering) and causal explanation (rather than metaphysics), we have to explain the facts, amongst which one fact -- the fact that we feel -- seems pretty refractory to any sort of explanation except if we suppose that feeling is simply a basic property of the universe (whether local to the organisms in the earth's biosphere [Galen's "micropsychism"] or somehow smeared all over the universe ["panpsychism"].)
There's no doubt that feeling exists, so in that sense feeling is indeed a property of the universe. But with all other properties -- doings, all -- we have become accustomed to being able(in practice, or at least in principle) to give a causal explanation of them in terms of the four fundamental forces (electromagnetism, gravitation, strong subatomic, weak subatomic). Those forces themselves we accept as given: properties of the universe such as it is, for which no further explanation is possible.
Galen's metaphysics would require adding something like a fifth member to this fundamental quartet -- feeling -- with the difference that, unlike the others, it is not an independent force, it does not itself cause and thereby explain doings causally, but rather is merely correlated with them, inexplicably, for some doings.
And our justification for adding a fifth acausal force? The fact that it is inexplicably (but truly) correlated with some doings (all doings that we feel). If feeling had truly been a 5th force (causal rather than acausal), namely, "psychokinesis" ("mind over matter"), then that would indeed have merited elevating it to fundamental status, exempt from further explanation along with the other four.
But there is not a shred of evidence for psychokinesis as a causal force (and all attempts to measure psychokinesis have failed, because the other four forces already covered all the causal territory -- doing -- with no remainder and no further room for causal intervention).
So all we have, inexplicably, is the fact that we feel. I don't think that that fact warrants any further metaphysics than that: feeling definitely exists -- and, unlike anything else, exists with certainty rather than just probably. It also happens to feel like something to find out and understand anything we know. The rest is an epistemic problem: why and how does getting or having data feel like something (for feeling creatures like us)?
Neither "micropsychism" nor "panpsychism" answer this question. They just take it for granted that it is so.
HOME TRUTHS ABOUT FEELING, DOING, EXPLAINING AND ROBOTS (Reply to Shikha Singh)
Doings are observable by anyone (via senses or senses plus measuring instruments).
Feelings are observable only to their feeler.
The only feelings a feeler can feel are his own.
That other people and animals feel is a safe guess, because they are related to and resemble us.
That today's man-made robots feel is as unlikely as that a toaster or stone feels.
That a robot whose doings are Turing indistinguishable from the rest of us for a lifetime would feel would be almost as safe a guess as that other people and animals feel. (Perhaps a biorobot would be an even safer guess).
A robot is just an autonomous causal system that can do some things that people and animals can do.
Cognitive science is about discovering the causal mechanism that generates our capacity to do what we can do. (We can think of it as discovering what kind of robots we are.)
No one but the Turing robot can know whether its causal mechanism does generate feeling.
And even if it does, not even the Turing robot can explain or know how or why.
(Page 1 of 4, totalling 51 entries) » next page
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 2.5 License.