Introduction to AI

What is AI?

CM317 Lecture Notes

1. What is AI?

To answer this question, we first have to answer:

What is/isn't intelligent? and What is/isn't artificial?

1.1. What is/isn't intelligent?

We have a few choices here:

(i) Every task that exceeds a certain level of complexity requires intelligence to perform. So any system that can perform tasks of that level of complexity is intelligent.
The problem with this view is specifying that level of complexity, and saying what makes it special -- special enough so that everything below it can be called "unintelligent" and everything above it can be called "intelligent."

A second choice:

(ii) Whatever normally requires intelligence (say human intelligence) to do, if it is done by any system, is intelligent.
For example, doing arithmetic normally requires intelligence. So anything that can do arithmetic is intelligent.

The problem with this view (if it is a problem) is that it defines intelligence in terms of performance: "intelligence is as intelligence does," and then it grounds it in human (or animal) "intelligence": "whatever can do what humans (animals) can do is intelligent."

But not everything humans (animals) do is intelligent. So we would first have to say what human performances are intelligent (and perhaps even why, and how) and then say that whenever those are performed, by any system, they are intelligent. (Which performances are those then, if walking is not one of them but doing arithmetic is?)

A third choice:

(iii) Only things that normally require human intelligence to do and are done in a certain way are intelligent.
The problem with this view is that you have to say what that "certain way" is.

Here are some candidates:

To be done intelligently, something must be:

(a) done computationally (which computations, then? any computations? if only some computations, which? and why those?) or
(b) done consciously (why? and how do we know it's done consciously?) or
(c) done by a conscious system (why does it have to be conscious?)
This all leaves the question of "What Is/Isn't Intelligent" a little open-ended.

Let's see if we do better with "What Is/Isn't Artificial?"

1.2 What Is/Isn't Artificial?

Here are a few candidates:
(i) Biological systems are natural; machines are artificial.
So a rabbit is natural and a toaster is artificial. But what if a toaster (or a computer) grew on trees? Would that make that very same system into another kind of a system, a natural one, even though it was functionally and structurally identical in all respects? (What kind of a difference is that!?)

Or if we succeeded in synthesising a rabbit, molecule by molecule: would that suddenly make an identical rabbit artificial?

So what is a "machine" anyway (apart from the fact that it happens to be man-made)?

(ii) A virtual system in a computer (e.g., a computer simulation of a chemical reaction, or a virtual-world simulation of a visible object) is artificial, whereas a chemical reaction or a physical object is real.
What is a robot, then? It is not a virtual object. Does that mean it is not artificial? Why does something have to be computational to be artificial? What is computation, anyway, and what is special about it?

1.3 What Is/Isn't Computation?

Here the answer is much more clear-cut, but it has some surprisingly similar features to the two previous questions:
(i) Computation is what mathematicians do when they "compute."
This sounds circular, but actually it's just the performance definition again: It's whatever mathematicians are doing when they do what they do. The rest is about trying to say what it is that they are actually doing when they are computing:
(ii-a) What mathematicians are doing when they compute is captured by the formal notion of (a) the Universal Turing Machine, (b) General Recursive Functions, (c) the Lambda Calculus, (d) Post/Kleene Machines (etc.), all of which turn out to be formally equivalent to one another. That is what computation is, and that is what mathematicians do. Anything mathematicians do will always be captured by these (equivalent) formal notions.
The above is also called the "Church-Turing Thesis," and it simply says that everything mathematicians have ever intuitively and practically meant by "computing" can be done by, say, a Turing Machine. So computation is what a Turing Machine does.

What does a Turing machine do?

(ii-b) Computation (i.e., what a Turing Machine does) is symbol-manipulation: formal symbols, arbitrary in shape (e.g., "0", "1") are "manipulated" (i.e., combined, recombined, written, erased, re-arranged) on the basis of formal rules
(algorithms, syntax) that operate on the symbols' shapes (e.g. "if you see a '0,' erase it and replace it by a '1'"), not on their meanings. Yet, if you have found the right symbols and manipulation-rules (the right algorithms), you can do remarkable things with them, so much so that the input symbols and the symbol-manipulations and the resulting output symbols will all be meaningfully interpretable.
For example, the symbols can be interpreted as quantities, standing for salary payrolls, or as the outcomes of scientific experiments, or as numerical calculations; they can even be interpreted as words and statements, true statements, about the world.
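As a toy illustration (hypothetical, not from the notes): a single shape-based rule that deletes every "+" token knows nothing about numbers, yet its output is interpretable as addition of unary numerals.

```python
# A hypothetical illustration: a rule that operates only on symbol shapes.
# Strings of "1"s separated by "+" are manipulated with no reference to meaning.

def manipulate(tape: str) -> str:
    """Apply one purely formal rule: delete every '+' shape."""
    return tape.replace("+", "")

# The rule knows nothing about numbers, yet under the interpretation
# "a run of n '1's stands for the number n", it computes addition:
result = manipulate("11+111")
print(result)   # "11111", interpretable as 2 + 3 = 5
```

The rule never consults what "+" or "1" mean; the meaning is entirely in our interpretation of the input and output shapes.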

One important feature of computation is that the symbol shapes (the notational system) don't matter: it is the rules and manipulations (algorithms) that matter, not the notation they happen to be formulated in. You could have used a completely different notational system to do exactly the same computation.

This is also the basis of the software/hardware distinction. It is the programme that matters for a computation, not the hardware details. (The programme of course has to be implemented on some hardware, even if it's just through someone doing the calculations by hand, but the computation itself is independent of those hardware details: radically different hardwares could have done exactly the same computation: the only thing they would all have in common would be the programme.)

This does give us a good definition of what is and is not computation: Anything that is what it is purely because it is executing a certain symbol-manipulating algorithm, and not because it is a certain physical system (obeying a set of differential equations), is computation. Anything else -- anything more or less than this -- is not computation (or not just computation).

Sample Question: All nontrivial computer programmes do something that is "intelligent." What distinguishes AI from the rest of computer science? Discuss principles and give examples.

2. Intelligence

2.1 IQ tests measure it
The concept of intelligence -- the intuitive idea and the everyday observation -- that some people are "smarter" than others -- is old. The idea that you can measure how intelligent people are is newer. Intelligence tests are designed to give higher scores to those people we consider more intelligent and lower scores to those we consider less intelligent.

To design such tests, we of course have to have some prior way of telling who is more and who is less intelligent. Then whatever that prior way is -- let us call it the "criterion" -- the questions in the test are picked so that those who can answer more of them correctly are more intelligent, according to the criterion. An example of a criterion might be how well people do in schoolwork, or in later work in life; or, for children, it might be that, say, a 9-year-old who can do the kinds of things the average 11-year-old can do is smarter than a 9-year-old who can only do the kinds of things a 7-year-old can do (the so-called "IQ" or "Intelligence Quotient," the ratio of mental age to real age). (There are of course problems with all these criteria, but at least they are criteria, and they can be used to pick out which questions we want to include in the test, because they correlate positively with the criterion, and which we want to throw out, because they do not.)
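The ratio IQ just described is simple arithmetic; here is a small illustrative sketch (the function name ratio_iq is hypothetical):

```python
# Hypothetical illustration of the ratio IQ described above:
# IQ = (mental age / chronological age) * 100.

def ratio_iq(mental_age: float, chronological_age: float) -> float:
    return 100.0 * mental_age / chronological_age

# A 9-year-old performing like an average 11-year-old:
print(round(ratio_iq(11, 9)))   # 122
# A 9-year-old performing like an average 7-year-old:
print(round(ratio_iq(7, 9)))    # 78
```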

But the trouble with intelligence-testing is that it tells you who has more of it and who has less of it, but it does not tell you what intelligence itself is. For more about IQ and the factors underlying it, see: Jensen, A. R. (1999) Precis of "The g Factor: The Science of Mental Ability." Psycoloquy 10(023).

2.2 Intelligence is as intelligence does
We don't know what intelligence itself is, but we do know it when we see it (and we can pretty much tell when there is more of it or less of it): How? By what it can and can't do. In general, the one we consider smarter is the one who can do more (of a certain kind of thing: see the Jensen paper for the difference between general and specific skills).

But IQ tests are just about intelligence differences. What about just plain intelligence itself? What about what it is that all normal people share, whether they have higher IQ or lower? Let's call that generic human intelligence -- and whatever it is that makes all normal people able to do the kinds of things all normal people can do, that's intelligence.

So the question then becomes "what is it that makes a system able to do the kinds of things normal people can do?" AI is meant to provide the answer to that question. Let us call this the "How?" question.

2.3 Individual differences
Individual differences in intelligence -- the kinds of things some people can do and others can't, or some people can do better than others -- might possibly give us clues about the answer to AI's How? question: Maybe there are "modules" in intelligence, so that it consists of a set of independent abilities (but Jensen's work suggests otherwise: that although there do exist some specific skills -- musical, spatial, mathematical, verbal -- most of intelligence co-varies as one general "g" factor).

Yet it seems clear that no matter how long or hard we measure differences in intelligence, that will not in itself answer the How? question.

2.4 Species differences
Besides individual differences between human beings in what they can and cannot do, there are also differences between species -- and not just sensory differences (some species see things better than we can, or even have different senses, such as sonar) or motor differences (some species can fly and swim) but also "cognitive" differences (memory, spatial analysis, etc.).

But it is not the species that can do more than we can do that might be relevant here, but the species that can do less: After all, our abilities evolved out of their abilities. Maybe we should try to model animal intelligence (animal AI) first, before trying the harder task of modelling human intelligence?

In some ways this might be easier, except for one possible problem: No other species than our own seems to have language, and that seems to be at the heart of our own intelligence.

2.5 Generic intelligence (g)
So after all, perhaps we have no choice but to take on the task of modelling generic human ability (Jensen's "g" factor). But we clearly cannot start at the top. We have to start with "toy" fragments of our total ability (as AI has done, with chess-playing, scene analysis, problem-solving) and then try to "scale up" to our total generic capacity.

Some of the dead ends AI has run into in trying to scale up to our total generic capacity will prove illuminating in working out what the best path for AI will be.

Sample Question: What questions can AI answer that IQ testing cannot, and how? Discuss concepts with examples.

3. The Sciences of the Artificial

3.1 What is/isn't a machine?
There is a temptation to say that "artificial intelligence" is about what machines can do, not what people can do. That the answer to the How? question for a machine differs from the answer to the How? question for a human. This has even given rise to two kinds of AI: One kind dedicated to designing machines to do useful, smart things (for people) and the other dedicated to "reverse engineering" the way people do what they do.

There really is a difference between these two forms of engineering intelligence (one forward and one reverse), but we still have to ask what a "machine" is. Because if we cannot say what a machine is, then there is no difference between the two kinds of AI, except in motivation.

3.2 Natural vs. "man-made"?
Could the difference between machine and non-machine be something as trivial as whether it happens to be "natural" or "man-made"? That would not be a very deep difference, then: the very same device that was designed by nature would become a machine if we managed to build one too.

3.3 What if toasters grew on trees?
And the reverse is true too: A toaster is a machine, because we built it. But if we discovered that toasters grew on trees somewhere, would that suddenly make the very same device a non-machine? Man-made vs. natural sounds like too arbitrary a distinction.
3.4 Mechanism
What we really have in mind when we talk about "machines" is mechanisms: When we build something, we (usually) know how it works: we understand its mechanism. That's why we call it a machine. It's mechanical, and we know that, because we built it.

With natural things, we often do not know how they work, so we suppose they are some other kind of thing. But don't natural things have mechanisms too?

3.5 Causality
If we look at what we mean by "mechanism" and "mechanical explanation" closely, we see that all we mean is something that works according to known, understandable causal principles. This includes man-made machines as well as physical systems (such as balls rolling down hills, or planetary motion, or molecular processes).

So when we raise the How? question about intelligence, all we are asking for is a causal explanation of how the systems we call "intelligent" (such as ourselves) are able to do what they are able to do. The answer to the How? question is a causal mechanism that can do what we can do.

(Obviously, we have to understand the system, e.g., because we designed it. Just pointing to another system as an explanation, because it can do everything we can do, is not an explanation unless we understand how that other system does it! So building one while sleep-walking isn't enough: we need to understand the causal processes involved.)

3.6 Determinism
Causality includes probability, by the way. So even if the mechanism is a probabilistic one, it is still a causal mechanism. The only thing we have to avoid in our mechanism is non-physical causes. There is no room for "mental" causes in an AI explanation. That would be telekinesis.

Sample Question: In what ways is AI like the design and study of artificial organs in biomedicine, and in what ways is it not? Discuss general principles and specific examples.

4. Computation is Formal Symbol Manipulation

For a full definition of computation see:

Harnad, S. (1994) Computation Is Just Interpretable Symbol Manipulation: Cognition Isn't. Special Issue on "What Is Computation" Minds and Machines 4:379-390

4.1 Turing Machines, Symbol Systems
A Turing Machine is a symbol manipulator: It gets input symbols (e.g. 0's and 1's) and, based on the state (shape) that it's in, and the rules it is built to apply, it manipulates those symbols (e.g., by writing some more symbols as output).

The important thing to remember is that the symbols are manipulated on the basis of their shapes, not their meanings. The shapes of symbols are arbitrary, and they can be taken to mean anything. (There is no resemblance or causal connection between the shape of the symbol "apple" and those round red things that it refers to.)

4.2 Symbol Manipulation Rules (Algorithms)
The symbol shapes are manipulated according to rules (algorithms) that are like recipes. Examples would be the formula for extracting the roots of quadratic equations, or even just the rules for long addition, multiplication or division.

The important thing is again that these rules apply mechanically, that is mindlessly, and are only formal, being based on the shapes of the symbols, not what they mean.
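Long addition can itself be sketched as pure shape manipulation. In this hypothetical example, a lookup table pairs digit shapes with result shapes; once the table is built, the procedure consults only shapes, never numerical meanings:

```python
# A sketch of long addition as mindless, shape-based rule application.
# TABLE maps (digit shape, digit shape, carry shape) to result shapes.

DIGITS = "0123456789"
TABLE = {}
for i, a in enumerate(DIGITS):
    for j, b in enumerate(DIGITS):
        for c, k in (("0", 0), ("1", 1)):
            s = i + j + k
            TABLE[(a, b, c)] = (DIGITS[s % 10], "1" if s >= 10 else "0")

def long_add(x: str, y: str) -> str:
    """Add two numeral strings by consulting the table, right to left."""
    width = max(len(x), len(y))
    x, y = x.rjust(width, "0"), y.rjust(width, "0")
    out, carry = "", "0"
    for a, b in zip(reversed(x), reversed(y)):
        digit, carry = TABLE[(a, b, carry)]
        out = digit + out
    return out if carry == "0" else "1" + out

print(long_add("347", "85"))   # "432"
```

The inner loop applies the recipe mechanically, mindlessly: it never treats "347" as a quantity, only as a string of shapes.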

4.3 Semantic Interpretability
Yet although the symbol manipulation rules are based only on the shapes of the symbols, not their meanings, what is remarkable (and useful) about them is that the results are nevertheless meaningful! Computation is rule-based symbol-shape manipulation, but it is semantically interpretable -- it all makes sense (unless the symbol system is a trivial one, with no meaningful interpretation).
4.4 Implementation-independence
The other important thing about symbol systems is that they are independent of their physical implementation (this is also the basis of the software/hardware distinction): This does not mean that there can be computation with no physical implementation at all! Symbol systems must be physically implemented in order to do anything. But the physical details of the physical system that implements them are irrelevant: An infinite number of radically different physical systems could have implemented the very same computation.

To put it another way: Every implemented symbol system is a dynamical system, but its dynamics are irrelevant to the computation it is performing:

Examples of implementation-independent symbol systems are arithmetic, logic, chess and language. Examples of implementation-dependent dynamical systems are motion, heat, sensorimotor transduction, and turbulence.

This is just to remind you that not everything is computation (i.e., sometimes the physical dynamics are central to what a system is and does).

Sample Question: What Is and Is not Computation? Say why and how, with examples.

5. The Turing Test: AI's IQ Test

For the Turing Test (TT), see Turing's original paper: Turing, A. M. (1950) Computing Machinery and Intelligence. Mind 59(236):433-460; and also Harnad, S. (1992) The Turing Test Is Not A Trick: Turing Indistinguishability Is A Scientific Criterion. SIGART Bulletin 3(4) (October 1992) pp. 9-10.

5.1 Performance capacity: If it looks like a duck, walks like a duck, quacks like a duck
If "Intelligence Is as Intelligence Does" then it is based on performance capacity. As soon as AI has designed a system that can do everything we can do, indistinguishably from the way we do it, AI will have a system that (1) passes the TT and (2) answers the "How?" question.

The essence of Turing's insight is: We are not mind-readers, not even with one another. The only way you can tell that anyone else has a mind (intelligence) is by what they do; you cannot go into their minds and make sure. So if any system can do what we can do, it has a mind too.

5.2 The penpal (symbolic) Turing Test
Turing's original Test was the "pen-pal" version: The system must be indistinguishable from real penpals, to real penpals.

This version of the TT has some problems:

(1)  The penpal TT is not total. We have many more capacities besides our penpal capacities (although our penpal capacities are pretty central, being based on language).

(2) The penpal TT could in principle be passed by computation alone (a symbol system).
If so, then it is open to a famous counter-argument by Searle: Since a computer programme is implementation-independent, Searle could himself execute all the code for passing the TT without understanding a word of what his penpal was talking about (e.g., if the TT was conducted in Chinese). This is not what we mean by intelligence.

(3) Combining (1) and (2): If you included a photograph (or any other physical object) with your letter to your penpal, the symbol system could not discuss it with you, as a normal penpal could (unless you also described the photo in words). This would easily be detected as failing the Turing Test. Maybe this is why Searle's argument works too: Because implementation-independent symbol manipulation is not enough to pass the TT: The TT-passing system needs some noncomputational (dynamical) capacities too, especially sensorimotor capacity.

5.3 The robotic Turing Test
So maybe the penpal TT is not strong enough, and what Turing meant was the robotic TT. That version is immune to Searle's counterargument (because it is not implementation-independent, so Searle cannot "read its mind" by "becoming" the system himself). But, besides being immune to Searle, the robotic TT can no longer be passed by just computation alone: It necessarily requires a hybrid symbolic/sensorimotor system.  
5.4 The Loebner Prize
The Loebner Prize is an annual event in which computer programmes compete to pass the TT. The trouble is that it only lasts a short while, whereas passing the TT requires a lifetime capacity. And it only involves fooling a few judges, whereas passing the TT requires lifelong indistinguishability, to anyone and everyone. In your view, what, if anything, does winning the Loebner Prize show?

5.5 AI as the reverse-engineering of intelligence
AI's goal is the reverse-engineering of intelligence: Designing systems that can do more and more of what people can do, until they are no longer distinguishable, and hence pass the TT. It might make sense to start with modules of human capacity, or even with animal capacity, but it must scale up to more than an evening of Loebner-Prize performance.

Sample Question: What is the difference between the symbolic and robotic Turing Test and why is it important? Discuss principles with examples.

6. "Strong" & "Weak" AI

6.1.  Computationalism

Module 5 explained what computation is: It is (1) implementation-independent, (2) syntactic rule-based, (3) semantically interpretable, (4) symbol manipulation.

Symbols (4) are just arbitrary objects. It doesn't matter what the object, or its "shape" is, because any other object could have been used. The choice of object is just a convention we agree to use, a shared notational system, like agreeing to speak English or Chinese. The "shape" of the word we use to stand for something has nothing to do with its meaning. (What a word such as WORD looks or sounds like has nothing to do with what it means -- it means what we mean by "word," just as RED means what we mean by "red." The words do not "resemble," nor are they physically connected to what they mean in any way.)

Implementation-independence (1) is related to the arbitrariness of the shape of the symbols we choose to use. A computation is the same computation no matter what programming language you write it in. It is also the same computation no matter what hardware you run it on.

The symbol manipulations are based on "syntactic rules" (algorithms, programmes)  (2) which operate only on the shapes of the symbols, not on their meanings. The best thing to keep in mind here is a Turing Machine: A "0" [or any other arbitrary symbol] appears on its reading head. The machine is in a certain "state" at the time, and let us say that the state [which is the implementation of the rule] is the following: "If you read a "0" while in this state, erase the "0", write a "1" and move to the next symbol on your reader."

That's symbol manipulation, based on symbol-shape, and not on symbol meaning.
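Here is a minimal, hypothetical sketch of that very rule in code: the step function is keyed only on the pair (state, symbol shape); meaning never enters.

```python
# A minimal sketch of the Turing Machine rule quoted above: "If you read
# a '0' while in this state, erase the '0', write a '1', and move on."
# States, symbols, and moves are all just arbitrary tokens.

def step(tape: list, head: int, state: str, rules: dict):
    """Apply one rule keyed on (state, symbol shape)."""
    symbol = tape[head]
    write, move, next_state = rules[(state, symbol)]
    tape[head] = write                       # erase and write
    return head + (1 if move == "R" else -1), next_state

# Hypothetical rule table: in state "s", any symbol becomes '1', head moves right.
rules = {("s", "0"): ("1", "R", "s"), ("s", "1"): ("1", "R", "s")}
tape, head, state = list("0101"), 0, "s"
for _ in range(4):
    head, state = step(tape, head, state, rules)
print("".join(tape))   # "1111"
```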

6.2 Searle's Chinese Room

See Harnad, S. (2001) What's Wrong and Right About Searle's Chinese Room Argument for discussion. "Weak AI" is just using computers to try to model anything, including intelligence. "Strong AI" (or "computationalism") is the theory that intelligence is computation, and can be implemented, not just simulated, by computation alone.
6.3 Symbolic AI and Robotic AI
In some ways, there is a split between symbolic and robotic AI, and it is along the lines of the split between the symbolic (penpal) and robotic TT. Symbol systems are good at some things (calculation, reasoning, problem solving), but not so good at others (sensorimotor activity, learning). Hybrid systems seem to be the optimal choice.
6.4 Know-how and Know-that
Another natural split is between skills and "knowledge": Skills tend to be procedural and sensorimotor (i.e., robotic) whereas knowledge tends to be propositional, factual (i.e., symbolic).
Knowledge alone is ungrounded; it is just a lot of symbols unless it is based in sensorimotor skills.

Consider the photo you enclose with your letter to your penpal: A picture is worth not only 1000 words (symbols) but an infinite number. You can describe faces in words till doomsday; it still won't substitute for the sensorimotor skill of being able to see and recognize them.

And what's in a name (of anything) if it is not grounded in that sensorimotor know-how?

6.5 Knowledge, Learning and TT-capacity
An intimate part of knowledge is knowledge-acquisition, and that too is a skill: We learn to recognise faces through sensorimotor experience. Moreover, learning is an essential part of the robotic TT: Many of our sensorimotor skills are learnt rather than inborn, and the most fundamental of them all is categorisation: Our capacity to learn to sort and label the things in the world. Any system that could not do that would fail the TT from the outset.

Sample Question: If a computer alone could pass the penpal Turing Test, would it or would it not have a mind? Give reasons and evidence in support of your answer.

7. Learning

7.1 Induction vs. Deduction
Induction is going from the particular to the general and deduction is going from the general to the particular. When you do a deduction (or inference) you are applying a rule that you already have to the case in hand. When you are doing induction you are looking from case to case, in search of the rule.

Our brains are remarkably powerful at induction. No man-made system yet comes close to the learning capacities of the human brain, but as we scale up toward the full robotic TT, we will have to model this powerful capacity.

Note that mathematics is deductive, whereas science is inductive: What then, is AI? Is AI an experimental science, to find out what can and cannot be done with certain symbol systems, or is it a mathematical one, to prove what can and cannot be done with certain symbol systems? Is an AI programme more like an experiment or a proof? What about a robot? Is reverse engineering more inductive than forward engineering? Is real robotics more inductive than virtual robotics?

7.2 The Credit/Blame Assignment Problem
Learning usually consists of trial and error, with corrective feedback. A child may be learning what a bear is, and its mother points to pictures saying "bear" and "monkey," etc. Then the child is asked "Is this a bear?" and mother tells it "yes, this is a bear" or "no, this is a monkey."

Inside the child's brain, a learning mechanism is learning to detect the right features and apply the right rules. Supposing the child does well for a while, then it gets one wrong: What has it done wrong? Which of its features and rules does it need to change? This is the "blame assignment problem." If instead the child starts doing well, which of its features and rules should get the credit?

Sometimes the child will overgeneralize a rule: "If it's brown and furry, it's a bear" (this is not necessarily a conscious rule). Then it's shown a polar bear and gets it wrong: back to the drawing board, but what was wrong?

With only a few features this is not so hard, but where there is a huge number of potential features and rules, this problem can become a very hard one. And it is the problem faced by any system that hopes to scale up to human-scale learning capacity.
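A toy sketch of the problem (hypothetical, not a model of the child's brain): a one-layer learner that, on an error, adjusts only the weights of the features active in the mistaken example -- the simplest possible credit/blame policy.

```python
# Hypothetical credit/blame assignment with three features:
# features = [brown, furry, white]; target: is it a bear?

def predict(weights, features):
    return sum(w * f for w, f in zip(weights, features)) > 0

def update(weights, features, target, lr=0.5):
    """On an error, blame (or credit) exactly the active features."""
    if predict(weights, features) != target:
        sign = 1 if target else -1
        weights = [w + sign * lr * f for w, f in zip(weights, features)]
    return weights

w = [1.0, 1.0, -1.0]             # roughly "brown and furry means bear"
w = update(w, [0, 1, 1], True)   # polar bear: furry and white, but a bear!
print(w)                         # [1.0, 1.5, -0.5] -- only the active
                                 # features (furry, white) were adjusted
```

With three features the assignment is easy to follow; with thousands of candidate features and rules, deciding which ones to blame is exactly the hard problem described above.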

7.3 The Frame Problem
The Frame problem is one that is faced repeatedly by symbolic AI: Whenever a computer programme tries to encode "knowledge" ("know-that") in advance, it tends to perform well until it reaches a point where the knowledge breaks down -- and breaks down so catastrophically that one cannot help but doubt that there was any knowledge there at all. A programme may do a fine job of understanding and explaining scenes involving phones, explaining why and how they are used, etc., when given event after event to describe, explain, and answer questions about (TT-style). But then you happen to ask it: "What happens to the phone after the user leaves the room?" No reply. Or worse, it answers "The phone ceases to exist."

This is an instance of the frame problem. It is usually described as the problem of knowing what does and does not stay constant (invariant) when there is a change. The solution is usually to try to add the new "fact" to the symbol system's "knowledge" -- but that can keep going on forever. And remember (lest you think we ourselves have frame problems too) that when the system falls into the frame problem, it is not just slightly wrong, it is radically wrong, so wrong that all prior bets are off and it's not clear that it ever really "knew" anything at all.
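A contrived sketch of how the breakdown looks (the fact base and its wording are invented for illustration): within the designer's anticipated "frame" the system answers fine; one step outside it, it has nothing.

```python
# A hypothetical scene "knowledge" base that only encodes the facts its
# designer anticipated in advance.

facts = {
    "what is the phone for": "making calls",
    "where is the phone": "on the desk",
}

def answer(question: str) -> str:
    # Nothing here says that unlisted things persist unchanged: the
    # system has no general principle of what stays invariant across events.
    return facts.get(question.lower(), "no knowledge of that")

print(answer("Where is the phone"))
# "on the desk"
print(answer("What happens to the phone after the user leaves"))
# "no knowledge of that" -- the unencoded fact simply isn't there
```

Patching in the missing fact just moves the frontier of ignorance one question further out, which is the "keep going on forever" point made above.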

Could the frame problem be a variant of the credit/blame assignment problem, but a radical one because the system is not learning, online, but meant to already "have" the knowledge in its head?
So could the frame problem be another symptom of the problem of trying to do AI with symbols only? Because of course learning is mostly sensorimotor and nonsymbolic.

7.4 The Symbol Grounding Problem
The symbol grounding problem concerns the connection between symbols and what they mean. In and of themselves, symbols mean nothing. If you went to a Chinese/Chinese dictionary not knowing any Chinese and you wanted to look up what any word meant, it would all be there, but it wouldn't do you any good, because you'd just keep going from one meaningless definition to another, never coming to rest on anything other than meaningless symbols.
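The regress can be simulated with a toy dictionary (the entries are contrived to make the cycle visible): chasing definitions only ever lands on more symbols.

```python
# A hypothetical dictionary-go-round: every symbol is defined only by
# more symbols, so lookup cycles without ever reaching a meaning.

dictionary = {
    "zebra":  "horse that is striped",
    "horse":  "equine animal",
    "equine": "horse-like creature",
}

def chase(word: str, steps: int = 6):
    """Follow first-word definitions; we only ever land on more words."""
    trail = [word]
    for _ in range(steps):
        if word not in dictionary:
            break
        word = dictionary[word].split()[0].split("-")[0]   # hop to next symbol
        trail.append(word)
    return trail

print(chase("zebra"))   # cycles horse <-> equine: symbols all the way down
```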

How is it different in our heads? How come the symbols in our minds mean something? Perhaps it's because some of them (not all, just some) are not just defined in terms of still further symbols (that would go on forever without reaching a meaning), but because they are instead connected to the things they stand for by the sensorimotor mechanisms that detect and recognize those things?

In other words, symbolic know-that needs to be grounded in sensorimotor know-how. And much of that sensorimotor know-how comes from sensorimotor learning. (After that, once you have grounded a basic vocabulary, the rest could all be gotten by combining and recombining the symbols into higher-order categories, the way dictionary definitions do -- but probably even there, all those abstractions need to be "refreshed" by some direct sensorimotor connections now and then.)

So perhaps the frame problem is a symptom of the symbol grounding problem; and perhaps that's why Searle's argument works too.

7.5 Algorithms
Algorithms are rules for manipulating symbols. They are remarkable and powerful. But to generate intelligence, some, at least, of the symbols must be connected to something other than just more symbols (as in a definition): there must be a direct sensorimotor connection, as in situated robotics.

Once a system is grounded, it inherits all the power of the Turing Machine and computation, and the rest can be done with just symbols and algorithms.

7.6 Problem-solving
Once a system is grounded, algorithms are the natural ally in reasoning and problem-solving. But those algorithms need to come from somewhere; so a learning mechanism is still needed.

Sample Question: Is AI just a matter of finding the right algorithms for intelligent performance capacity? If yes, say why; if no, say why not. Give supporting arguments and examples.

8. Neural Nets

8.1 Parallel, distributed networks of interconnected units with (learning) rules for changing connectivity as a function of I/O
An alternative to symbol systems is parallel distributed systems (neural nets) that change the strength of their interconnections with experience. Such systems have turned out to have powerful learning capacities: Could they be the critical component needed to ground symbols in the objects they stand for, via sensorimotor connections?

8.2 Are nets symbol systems (SS)?
Even if nets are critical to grounding, do they have to be dynamical systems? Could they not be implementation-independent symbol systems too? Sensorimotor transduction has to be dynamic, but do nets?  
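What 8.1 describes can be sketched in a few lines (a hypothetical delta-rule learner, not any particular published net): connection strengths change as a function of input/output experience. Note that this net is itself being simulated symbolically on conventional hardware, which is exactly the question at issue here.

```python
# A minimal net: two input units, one threshold output unit, and a
# learning rule that changes connection strengths from I/O experience.

def train(samples, lr=0.2, epochs=20):
    w = [0.0, 0.0]   # connection strengths
    b = 0.0          # bias (threshold) term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1.0 if w[0] * x1 + w[1] * x2 + b > 0 else 0.0
            err = target - out
            # strengths change in proportion to error and activity
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Learn logical OR purely from examples:
OR = [((0, 0), 0.0), ((0, 1), 1.0), ((1, 0), 1.0), ((1, 1), 1.0)]
w, b = train(OR)
print([1 if w[0] * x + w[1] * y + b > 0 else 0 for (x, y), _ in OR])
# [0, 1, 1, 1]
```

Nothing here is a built-in rule for OR; the "rule" emerges in the connection strengths, which is what gives nets their inductive flavour.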
8.3 (SS hardware? simulated? Trained?)
Are nets merely an alternative hardware for implementing symbol systems (because then they are irrelevant, like all implementational details)? Or can they be simulated symbolically? (Because then it is irrelevant that they are nets: they are just another symbolic algorithm.) Can nets be trained to be symbol systems? The symbols vs. nets question is not a straightforward one.  
8.4 Hybrid systems
Regardless of whether nets themselves are symbolic or dynamic, hybrid systems seem to be the right way to go if the robotic TT is AI's goal, because sensorimotor capacity is inescapably dynamic.

Sample Question: Are hybrid systems relevant only to AI as reverse engineering, or as forward engineering too? Give reasons and examples.