On Sat, 22 Feb 97 15:23:23 GMT Harnad, Stevan wrote:
gd> Symbols do have meaning, I believe.
sh> Yes they do: for us. But they have no meaning for a pure symbol system
sh> (e.g., a computer). Hence we can't be pure symbol systems. So what else is
sh> there in our heads that might embody meaning rather than just symbols
sh> and symbol manipulation rules?
I did not mean to suggest that we could be pure symbol systems. I was
referring to the features/properties of objects which enable
categorisation by name and thus, among other things, also enable the
grouping of objects which have apparently common features. I would
probably have referred to these as extrinsic properties, i.e., the
properties of distal objects.
gd> However, there is no intrinsic way of recognising similarities
gd> of objects like a horse and a zebra except with yet more symbol(s).
gd> We would need a different symbol for each 'new' description.
sh> Kid-sib would have problems with "intrinsic": "Is there an "EXtrinsic"
sh> way? And what's that, then?"
"Intrinsic" referred to the meaning embedded in a symbol system.
"Extrinsic" would be the meaning embedded in the invariant properties of
the object (i.e., those which are generally sufficient to distinguish
one object from another).
sh> Even if we had a symbol for each encounter with every entity we know, that
sh> would still just be symbols, whose shapes, remember, are arbitrary,
sh> unlike analog structures and processes, whose shape is NOT arbitrary;
sh> they resemble the objects of which they are the sensory "shadows."
sh> The activation of the analog processes and
sh> neural nets IS our seeing of the object.
gd> So, Stevan suggests (in his grounding theory) that the mind uses base
gd> groups (or functions) which are known by their invariant properties of
gd> description (eg., those which would apply to a horse, to a zebra or to
gd> a donkey etc); to which can be hung perceptual variants such as piebald
gd> or stripes or big ears; which enable the identification of different,
gd> but functionally similar objects using cognitively constructed links.
sh> I'm not sure whether you have this one right: The only way we could
sh> recognise a zebra the very first time we saw one would be if (1) some
sh> person or book had told us that "a zebra looks like a black/white
sh> striped horse" and (2) we already knew what "horse" and "striped" etc.
sh> mean. If we knew horse and striped only from a verbal (= symbolic)
sh> description too, then once we went low enough in this abstract, verbal
sh> hierarchy, we would have to arrive at something other than just more
sh> symbols and descriptions: Analog projections and feature-detecting
sh> neural nets are candidates for what the bottom-up mechanism for
sh> grounding symbolic knowledge might be.
My proposition is that the invariant properties of description (i.e.,
those which are extrinsically and generally sufficient to distinguish
one object from another visually, by shadows or angles, for example) are
one aspect of the structure of the mind which interacts with the symbol
system. Hence my observation ...
gd> Thus the visual system has been provided with a direct linkage with the
gd> symbol system even where a 'new' object has been seen for the very
gd> first time by that person.
sh> Not quite: SOME symbols (category names, arbitrary in "shape") are
sh> connected to the distal objects that they stand for by a mechanism that
sh> takes the proximal analog projection (shadow) on our sensory surfaces
sh> and filters out the features that allow us to identify (categorise,
sh> name) the distal object correctly. The knowledge we get from a symbolic
sh> description like a sentence describing a zebra must be grounded in the
sh> symbols that already have a connection to the distal object through
sh> analog projections and feature-detecting neural nets.
sh> That's just a theory, though, so don't go believing it. All you have
sh> to believe is that it can't just be symbols all the way down.
Stevan, on this last point, are you saying that SOME objects
are not linked as you describe, or that some symbols are not?
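[The mechanism under discussion, category names grounded in invariant
features of the sensory projection rather than in further symbols, can
be caricatured in a few lines of code. This is only a toy
nearest-prototype sketch, not Harnad's actual model; the feature names
and values are invented for illustration.]

```python
# Toy sketch: category names ("symbols") grounded in feature vectors
# ("analog projections"), not in other symbols. Features and values
# are invented for the example: (horse-shaped, striped, long-eared).
PROTOTYPES = {
    "horse":  (1.0, 0.0, 0.0),
    "zebra":  (1.0, 1.0, 0.0),
    "donkey": (1.0, 0.0, 1.0),
}

def categorise(projection):
    """Name the category whose invariant features best match the
    sensory projection (nearest-prototype rule)."""
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    return min(PROTOTYPES, key=lambda name: dist(PROTOTYPES[name], projection))

# A never-before-seen striped, horse-shaped animal gets named via its
# features, not via a further symbolic description:
print(categorise((0.9, 0.8, 0.1)))  # → zebra
```

On this sketch, "zebra" is connected to the distal object through its
feature profile; only the names themselves remain arbitrary in shape.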
This archive was generated by hypermail 2b30 : Tue Feb 13 2001 - 16:23:50 GMT