People tend to be of two minds (pun intended) on the issue of mental content. On the one hand, no one can dispute that the way we talk about the mind is largely figurative. The mind is racing and wandering, it has things on it and in it, it is sometimes full and sometimes empty, it is open and narrow and dirty and right. We are used to talking this way, it is useful to talk this way (I don’t think there is anything wrong with our psychological talk), and everybody pretty much understands that this is a discourse full of “figures of speech.” The philosophically inclined see well enough that “mind” is an abstract concept of some sort. On the other hand, we have deeply internalized some of this figurative language, so deeply that one of the most central, perennial problems of epistemology is the alleged problem of the relation of our “inner” perceptions of the world to the “real world” out there, outside of our heads. Many people think that we are stuck inside our heads: a blatant conflation of the literal with the figurative.
Why is this? For one thing, when we talk about the mental we must use the language that we have, and this is a language that evolved for talking about the physical, “external” world of three-dimensional objects in three-dimensional space. The room has an inside and an outside, and there are things (concrete things) inside it (those chairs and tables that philosophers are always talking about). “Beliefs” and “sensations” are words that take the same noun-role as “chairs” and “tables,” and thus the grammar of the language is constantly pushing us to conceive of these mental terms as referring to some variety of concrete things. This is the sense in which Wittgenstein uses the word “grammar”: to indicate the way that language contains metaphysical suggestions that can lead to confusion. The metaphysical grammar of language is the grammar of three-dimensional objects in three-dimensional space; objects, moreover, that interact with each other according to regularities of cause and effect.
A basic confusion about the mind is the idea that it is a kind of inner space filled with things and (non-physical) processes. It is important to see the close relationship between this pseudo-spatial conception of the mind and the problem of mental representation. Physical things and processes don’t mean anything (or: physical descriptions and explanations of the things and processes in the world don’t refer to semantic properties, only to physical ones). The concept of a symbol is essentially relational: symbols need to be interpreted. For interpretation to happen there must be an interpreter. Pictures, books and computer screens need to be looked at by someone – someone with a mind. Thus the representational model has a “homunculus” problem: in order for the symbol to work it must be read by someone, just as streetlights and recipes only “work” when actual people respond to them with appropriate actions. Another way of putting the problem is the “regress” objection: if the theory is that minds work using representations, then the homunculus’s mind must work that way as well; but in that case the homunculus’s mind must contain another homunculus, and so on.
Some cognitive scientists have tried to overcome this objection by suggesting that a larger neural system of cognition can be modeled as responding to information from neural subsystems without succumbing to the homunculus fallacy, but this strategy can’t work if a “representational” theory of mind is one that posits representations as necessary for thought. A theory of mind that succeeds in naturalizing psychology will be one that shows how the “mental” emerges from the non-mental. Any theory that helps itself to something mental at the outset accomplishes nothing. The concept of a representation is a mental concept by definition: the verb “to represent” presumes the existence of an audience. Representation, like language, cannot be a necessary precondition for thought, for the simple enough reason that thought is a necessary precondition for both representation and language (a being without thoughts would have precious little to talk about!). This is not a chicken-and-egg question.
There is an important discussion here with the computationalists, who think that the mind/brain is a kind of computer. If it is the representations that bear logical relations to one another (the computationalist argues), and rationality consists in understanding and respecting those relations, then rationality requires a representational (typically thought of as some sort of linguistic) architecture. If computation is formal, rule-governed symbol manipulation, then symbols are necessary for computation/cognition. Jerry Fodor, for example, hopes to bridge mind (intentional explanation) and body (physical explanation) by way of syntax, the formal organization of language. The idea is that all of the causal work that would normally be attributed to the content of the representation (say, the desire for water) can be explained instead by appeal to “formal” (syntactic, algorithmic) features of the representation (there is more discussion of Fodor below).
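To make the “syntactic engine” idea concrete, here is a toy sketch of my own (not Fodor’s notation; the token names and the crude matching rule are invented for illustration). The rule produces the “rational” transition from a desire and a belief to an action by operating only on the shapes of the tokens, never on their content:

```python
# A toy "syntactic engine": the inference rule fires on the shape of the
# tokens, not on what the tokens mean. (Illustrative only; token names
# and rule format are invented, not Fodor's own formalism.)

# Mental "sentences" are just structured tuples.
beliefs = [("BELIEF", "water_in", "fridge")]
desires = [("DESIRE", "drink", "water")]

def practical_syllogism(beliefs, desires):
    """Purely formal rule: if there is a DESIRE for X and a BELIEF whose
    predicate contains X at location L, emit a GOTO(L) token. The rule
    matches token shapes; 'water' is never interpreted by the program."""
    actions = []
    for (_, _, obj) in desires:
        for (_, pred, loc) in beliefs:
            if obj in pred or obj == pred:   # crude syntactic match
                actions.append(("GOTO", loc))
    return actions

print(practical_syllogism(beliefs, desires))   # [('GOTO', 'fridge')]
```

The point the computationalist wants is visible here: the causal transition tracks the logical relation between the contents, but it is implemented entirely by operations on uninterpreted forms.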
One challenge to this computationalist (or “strong AI”) view is connectionism, the view that the mind/brain has an architecture more like that of a connectionist computer (also called parallel distributed processing, or PDP; in the wetware literature this is the “neural nets” discussion). In connectionist computing, networks of simple nodes activate one another through weighted connections. There is an input layer whose nodes are activated by operators or sensors, one or more hidden layers where patterns from the input layer are transformed on the way to the output, and an output layer of nodes. The weights on the connections are adjusted during training – by the programmers, or by a learning procedure the programmers set up – to steer the machine in the right direction. Some of these systems were developed by the military to train sonar systems to recognize underwater mines, for example, but they are now ubiquitous as the face-, handwriting- and voice-recognition programs used in daily life.
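For readers who want to see the moving parts, here is a minimal sketch of such a network in Python. The data are made up (loosely echoing the mine-vs-rock sonar task), and the layer sizes and learning rate are arbitrary choices of mine: an input layer, one hidden layer, and an output layer, joined by weighted connections that a training loop adjusts rather than a programmer setting them by hand.

```python
# Minimal connectionist sketch: input layer -> hidden layer -> output layer,
# with weights learned by a simple training loop (toy, invented data).
import numpy as np

rng = np.random.default_rng(0)

# Four fake "sonar echoes" (inputs) and their labels: 1 = mine, 0 = rock.
X = np.array([[0.9, 0.1, 0.8], [0.8, 0.2, 0.9],   # mines
              [0.1, 0.9, 0.2], [0.2, 0.8, 0.1]])  # rocks
y = np.array([[1.0], [1.0], [0.0], [0.0]])

# Weighted connections: input -> hidden and hidden -> output.
W1 = rng.normal(size=(3, 4))   # 3 input nodes, 4 hidden nodes
W2 = rng.normal(size=(4, 1))   # 4 hidden nodes, 1 output node

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(5000):                 # training loop
    hidden = sigmoid(X @ W1)             # activation spreads forward
    output = sigmoid(hidden @ W2)
    error = y - output                   # how far off the outputs are
    # Backpropagation: nudge the weights to reduce the error.
    d_out = error * output * (1 - output)
    d_hid = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 += 0.5 * hidden.T @ d_out
    W1 += 0.5 * X.T @ d_hid

# After training the outputs should be close to [[1], [1], [0], [0]].
print(np.round(sigmoid(sigmoid(X @ W1) @ W2), 2))
```

Notice that after training there is no single place in the network where “mine” is stored; whatever the network “knows” is smeared across the weights, which is just the point connectionists press against the symbol-manipulation picture.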
Connectionist machines are very interesting for purposes of the present discussion. They appear to be self-teaching, and they appear to operate without anything that functions as a symbol. There is still the (human) programmer, and there is still nothing that seems like real consciousness, but such a system attached to a set of utilities (so far, the utilities of the programmers) looks to be effective at producing organized behavior and fully explicable in operational terms.
Meanwhile, I’m not even sure that computers have representations in the first place. That is, it’s hard to see anything that functions as a representation for the computer (which is not surprising, since it doesn’t look like the computer has a point of view). What makes computers interesting to cognitive science in the first place is that with them we can tell the whole causal story without appeal to representations: the binary code just symbolizes (to us) the machine state (the status of gates in the microprocessors), and we can sustain the machine-level explanation through the description of the programming languages and the outputs. Those “outputs,” of course, are words and images interpreted by humans (mostly). So even “classical” computers have computational properties while having no representations of their own. Or perhaps another way to put it is that two senses of “representation” are conflated here: the sense in which a human observes a computational process and explains it by saying “See, that functions as a representation in that process,” and the sense in which a human claims to interpret a representation. (I will discuss computational properties as “formal” properties in the discussion of the problem of rationality below.)
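A small illustration of that machine-level story (my own toy example, not anything from the cognitive-science literature): addition can be described exhaustively as operations on bit patterns, and calling the result “nine” is something we add when we read the output.

```python
# The "two levels" point: the machine-level story is exhausted by bit
# operations; reading the final bit pattern as a number (or a temperature,
# or a vote count) is our interpretation, supplied afterward.

def add_bits(a: int, b: int) -> int:
    """Addition using only bitwise operations -- the kind of gate-level
    story that needs no appeal to what the bits 'represent'."""
    while b:
        carry = a & b          # bits that will carry
        a = a ^ b              # sum without the carries
        b = carry << 1         # carries shifted into place
    return a

pattern = add_bits(0b0110, 0b0011)
print(bin(pattern))    # 0b1001 -- the machine-level fact
print(pattern)         # 9      -- "nine" only once we read it as a number
```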
The computationalist/connectionist discussion is a striking example of how little the larger discussion has changed since the 17th century. It is the rationalist/empiricist, nativist/behaviorist argument rehearsing itself yet again through the development of this technology.