On Tue, 18 Apr 2000, Pentland, Gary wrote:
> Aren't our thoughts meaningless until they reach the part of our mind that
> is conscious?
That sounds like some sort of theory. It's certainly nothing you and I
KNOW about. I experience thoughts when I think them (and they are
meaningful then). But I have no idea what thoughts are, I have no idea
about what a mind is, let alone what a part of it is. And most of what
goes on in my HEAD (as opposed to my mind) seems to be unconscious; if
it weren't, AI/CogSci would be a lot easier to do...
> don't we do stuff subconsciously (like remembering to breathe)?
> Those thoughts and processes are meaningless until they interact with the
> world or our consciousness.
Most of what goes on in my head is unconscious; and if it means
anything, I certainly don't know anything about it. But again I don't
know what you mean by "interact with the world" or with "consciousness."
All I know is that my thoughts are conscious.
> If this is the case then are we trying to model just part of the mind (the
> conscious part)?
Here is the "homunculus" problem (the problem of the little man in the head):
(1) I see objects (e.g., chairs), but I have no idea HOW I do that; the
underlying structures and processes in my head are unconscious.
(2) I think thoughts. I have no idea how; again, the underlying
structures and processes in my head are unconscious.
But it is a mistake to think that what is going on in my head is a
"conscious part" looking at an unconscious part, the way I look at
objects. Or a conscious part thinking thoughts, as I think thoughts.
To think of it that way is to think of a little man in the head, that
does inside my head what I do with my head. But we are trying to
explain what is going on in my head. If I suppose there is a homunculus
in my head, then I have to go on to explain what's going on inside ITS
head, and so on.
So let's just say that what we are trying to model is T3 capacity; and we
are just hoping (with Turing) that thoughts and consciousness will come
with the (T3) territory.
> > Harnad:
> > Why not try it? Just pick two words (say, "two" and "words") and swap
> > their meanings. Now grep a big text for instances of "two" and "words"
> > and show me how they all make systematic sense with the swapped
> > meanings. Here's a start: "The sentence has more words than two."
> This would be understandable to the person that has the meanings, so if
> the symbols "two" and "words" were regrounded to everyone who would know
> the difference, surely communication is the key to understanding meanings.
No, you are misunderstanding me. If "two" meant "words" and "words"
meant "two," the ordinary English sentences in which they occur would
NOT make sense if interpreted that way. It's not just a question of
two new vocabulary words. It means a sentence like "The sentence has
more words than two" would not make sense if the meanings were
swapped. It would be like saying "The sentence has more two than
words." What on earth is that supposed to mean? (And remember that EVERY
occurrence of either word in every sentence would be meaning-swapped.)
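The swap can be made concrete mechanically. A minimal sketch (my own illustration, not from the original exchange): swapping every whole-word occurrence of "two" and "words" in a text is just what reading that text under the swapped meanings amounts to, and the result of applying it to an ordinary sentence shows the nonsense immediately.

```python
# Illustrative sketch only: mechanically swap the tokens "two" and
# "words", which is equivalent to reading the text with their
# meanings exchanged.
import re

def swap_tokens(text, a="two", b="words"):
    """Replace every whole-word occurrence of a with b and vice versa."""
    placeholder = "\x00"  # temporary marker so the two substitutions don't collide
    text = re.sub(rf"\b{a}\b", placeholder, text)
    text = re.sub(rf"\b{b}\b", a, text)
    return text.replace(placeholder, b)

sentence = "The sentence has more words than two"
print(swap_tokens(sentence))
# -> The sentence has more two than words
```

Run over a whole corpus (as with grep), every such sentence comes out systematically garbled in the same way.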
> You say 0 is less relative to 1 and more relative to -1, but as a concept
> 0 is meaningless, do you understand, or can you explain what IS nothing?
Most kids have no trouble understanding it if you say "In one of my hands
there is a candy and in the other there is nothing." They understand
"nothing" just as well as they understand "candy" (and on the basis of
comparable kinds of prior grounding experiences and explanations).
> > Harnad
> > The TT is not a proof. It just reminds you not to ask for MORE of a
> > model than you ask of one another.
> Has anyone tried to propose any sort of proof?
Proof of what? The Church/Turing Thesis (CT/T) or the Turing Test? In any case,
the answer is no for both. They are not the kinds of statements that are
amenable to proof. Only mathematical statements are. Not even scientific
laws (F = ma) are based on proof, just on evidence.
All evidence supports CT/T so far, but that's no more a proof than it was
a proof of Fermat's Last Theorem (until they really did prove it) that
every example tried supported it. The CT/T is that the Turing Machine
captures everything we mean by "computation." How can you prove we won't
ever come up with a counterexample?
TT is even worse: You could no more prove that every TT-passer has a
mind than you could prove that it hasn't one -- because of the
Other-Minds Problem (except in the special case of Searle's Periscope
and T2 in the Chinese Room).
> a simulation is a fake, like the simulated furnace that is not hot
> to touch.
A simulated furnace is a fake because you can feel that it doesn't
really get hot. But what can you see/feel about whether or not a
TT-passer has a mind?
> If a system passed the Turing test then it could be claimed by
> some to have a mind, but if I know that it is a piece of software and there
> is a book saying how it works then it is a simulation as I feel that a
> real mind is unpredictable and will not conform to a book, as it will have
> the capacity to learn and change its mind about things, be convinced of facts
> that yesterday it thought were not facts.
What if the book also listed the algorithms for learning and changing its
mind about things?
(And what if your brain works according to those algorithms too, and
someone discovers them: when I see them written out, does that turn you
into just a "simulation"?)
This archive was generated by hypermail 2b30 : Tue Feb 13 2001 - 16:36:28 GMT