> From: Head, Phineas <firstname.lastname@example.org>
> The one thing I have a slight difficulty with is
> that in Prof. Harnad's example of the telephone ringing, the
> 'phone itself is a man-made object, its function and normal
> operating procedure a human concept. Not having taught the
> computer that putting down the receiver is part of the
> 'rules' of the telephone-answering 'game' is the same thing,
> isn't it? Or isn't it?
It doesn't matter whether the phone is natural or man-made.
The Frame Problem always crops up sooner or later either
way because symbolic "knowledge" cannot cover every contingency
(everything that might happen, or be the case). There's
just too much that can happen.
Note that Pat Hayes, who (with John McCarthy) first identified the
Frame Problem, describes it as an inability to know what stays the
same and what changes when certain things happen (e.g., after you
finish your phone call, the program doesn't "know" what becomes of
the phone).
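To make the point concrete, here is a minimal sketch (all names hypothetical, not any actual AI system) of the kind of symbolic action rule that gives rise to the Frame Problem: each rule lists only what an action adds and deletes, and the system silently assumes everything else stays the same, so any unlisted contingency slips through.

```python
# Toy symbolic knowledge base: facts are strings, actions are rules
# that list only what they add and delete.
facts = {"phone_ringing", "phone_on_hook", "caller_waiting"}

rules = {
    "pick_up_receiver": {"adds": {"call_in_progress"},
                         "deletes": {"phone_ringing", "phone_on_hook"}},
    "hang_up":          {"adds": {"phone_on_hook"},
                         "deletes": {"call_in_progress"}},
}

def apply(action, state):
    """Naive update: keep every fact the rule doesn't delete.
    This 'everything else stays the same' assumption is exactly the
    frame assumption the rules never state explicitly -- and it fails
    whenever an unlisted contingency matters."""
    rule = rules[action]
    return (state - rule["deletes"]) | rule["adds"]

state = apply("pick_up_receiver", facts)
# The rule said nothing about "caller_waiting", so the system blindly
# carries it forward -- even though answering the phone plausibly
# means the caller is no longer waiting.
print("caller_waiting" in state)  # → True (the toy system still believes it)
```

No finite set of such rules can enumerate everything an action leaves unchanged or disturbs; that is the "too much that can happen" above.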
> I agree that we might be able to do some
> of the 3rd year mathematics paper by logic, chance and
> guesswork, but because we were in ignorence of the symbols
> and their rules, we wouldn't know if we got the right
> answer, even if we HAD! I think it boils down to another
> Chinese Room, essentialy.
Doing things rulefully with symbols, without knowing what any of it
means, is exactly the Chinese Room.
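If it helps to see how little "understanding" rule-following requires, here is a hypothetical sketch (the rulebook and symbols are invented for illustration): the executor matches input shapes to output shapes and needs no idea what either means.

```python
# A rulebook that maps input symbol strings to output symbol strings.
# Whoever (or whatever) executes the lookup operates purely on the
# shapes of the symbols -- no grounding, no meaning.
rulebook = {
    "squiggle squiggle": "squoggle",
    "squoggle squiggle": "squiggle squoggle",
}

def answer(symbols: str) -> str:
    # Pure shape-based lookup; an unrecognised input gets a default
    # reply that is just as meaningless to the executor.
    return rulebook.get(symbols, "squiggle")

print(answer("squiggle squiggle"))  # → squoggle
```

The lookup could be performed flawlessly by someone who has never seen a squiggle defined, which is the point of the thought experiment.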
> Finally, do you think that artificial neural nets
> can "expand their frame"? I think they probably could.
You're using "frame" too metaphorically. The Frame Problem is
peculiar to a symbolic approach to knowledge (just as the
Symbol Grounding Problem is). Nets don't have any squiggles
and squoggles that are meant to be the knowledge of what
happens to phones after people answer them, so they have nothing
to have a frame problem with. (Of course, nets can't do most of
the things a symbol system can do, either.)
This archive was generated by hypermail 2b30 : Tue Feb 13 2001 - 16:23:51 GMT