THE NEURONAL PLATONIST: Michael Gazzaniga in conversation with Shaun Gallagher

Introduction: Plasticity, the Interpreter, and the Self

Psychology is dead. The self is a fiction invented by the brain. Brain plasticity isn't all it's cracked up to be. Our conscious learning is an observation post factum, a recollection of something already accomplished by the brain. We don't learn to speak; speech is generated when the brain is ready to say something. False memories are more prevalent than one might think, and they aren't all that bad. We think we're in charge of our lives, but actually we are not. On top of all this, the common belief that reading to a young child will make her brain more attuned to reading is simply untrue.

These are theses that Michael Gazzaniga defends in his book The Mind's Past (1998), a report on some recent research in brain science. Although Gazzaniga's book is not without rhetorical and dramatic flourishes, his conclusions are based on hard science, much of it conducted in connection with his own research into the experience of split-brain patients. In the interview that follows we discussed a number of these issues.

For example, Gazzaniga takes a controversial position with regard to the issue of brain plasticity. Some neuroscientists place a great deal of emphasis on plasticity and they appeal to good empirical evidence to justify that emphasis. Michael Merzenich and his colleagues (1984; 1987), for example, have demonstrated cortical reorganization of the somatosensory body map following amputation of fingers in owl monkeys. Following amputation, the brain area that had originally represented the pre-amputation digits was gradually invaded by neuronal growth representing other remaining digits. Similar cortical reorganization has been shown to be associated with phantom limbs in humans (Ramachandran et al., 1992). Carla Shatz and others have extended principles of experience-dependent plasticity to brain development, demonstrating that neural activation is a necessary condition for the proper development of the brain. Shatz concludes that 'the maturing nervous system can be modified and fine-tuned by experience itself' (1992, p. 41).

Gazzaniga cannot and does not deny the empirical evidence. The question, however, is how to interpret it. One interpretation emphasizes the role of experience in the formation of the brain. An extreme version of this theory would equate the brain with an empiricist tabula rasa waiting for experience to shape its different mechanisms. On this view, represented by Sejnowski's concept of 'neural constructivism', even ordinary learning involves a high degree of plasticity (Gazzaniga, 1998, p. 13). Even a more moderate version of this position would support the idea that reading to your children will tune their neurons for improved reading capability. Another interpretation emphasizes genetic determination. The extreme version of this theory would claim that neural mechanisms are entirely hard-wired, brain development having been guided according to strict genetic plans. If this is right, reading to your child is very nice, but it doesn't do anything for your child's brain.

Now most scientists are in the middle somewhere. Some are closer to the first empiricist pole; others are closer to the second, nativist pole. Gazzaniga belongs to this latter group. Although it is true that normal brain development requires certain kinds of experience, this does not imply that it is not determined by genetic factors. If, for example, the normal development of the visual cortex requires visual experience during certain critical periods, this is because the human brain has evolved within particular sorts of visual environments. Since genetic factors originate in an evolutionary process, the genetically determined sequence of developmental steps takes place with built-in expectations about what sorts of signals will be coming from the environment. If the environment is abnormal (think of Hubel and Wiesel's experiments with cats raised in darkness or in surroundings that contain only horizontal lines [e.g. Wiesel, 1982; Wiesel and Hubel, 1963]), that will have some influence on brain development. But this does not mean that development is driven by environmental factors. Gazzaniga maintains that development is genetically driven.

The broad scaffolding of the brain is built by genetic mechanisms, which also control almost independently the specifications of what connects to what, but the details of cortical arrangements might be left to experiential effects. However, the so-called experiential effects are merely brain activity, not necessarily encoded information from the environment (1998, p. 46).

Plasticity is not nonexistent, but it is not the principle of development. Most development is self-contained, genetically determined development.

But why should plasticity exist at all? Gazzaniga follows the suggestion of Wolf Singer (1987; 1995), that some brain systems have to remain somewhat adaptive during early development to allow for changes in the physical size of the body, for example, the size of the eyeball. But this sort of plasticity is limited and fully within the boundaries set by genetic development. Abnormal cases (genetic blindness, phantom limb, etc.) may involve unusual degrees of plasticity, but this sort of plasticity is abnormal rather than a developmental principle.

If brain plasticity is not a principle of development, it still may be productive to speak of a more general plasticity belonging to the organism as a whole, something that is the result of an already highly developed brain. Gazzaniga (1992) refers to a 'functional plasticity' that can account for human variation and uniqueness. 'Humans have a greater functional plasticity than other species, and appear to have repertoires of responses that go beyond the simple variability that allows species to adapt to changed environments through selection' (1992, p. 112). To provide an account of this complex human capacity for unique responses and adaptations Gazzaniga develops a cognitive model based on what he calls 'the interpreter'. The interpreter is a capacity, or a set of mechanisms, that resides in the left hemisphere of human brains. Its function is to enable us to make inferences and to form beliefs about both interior events (internal bodily states) and external events, including our own actions and the actions of others. As Gazzaniga indicates, this is where evolution outsmarts itself. The interpreter frees us from the immediacy of the present environment; it allows us to remember and to plan, and to do so in inventive and unique ways.

The notion of the interpreter is consistent with theory of mind approaches in developmental psychology. The child's theory of mind, and her ability to recognize others as having intentions different from her own, depends on an interpretive capacity that emerges gradually over infancy and early childhood. Gazzaniga, however, first worked out the theory of the interpreter and its left-hemisphere location in his studies of split-brain patients. If at first the view had been that various cognitive processes were bilaterally represented, so that a splitting of the brain (severing the corpus callosum) produced a sort of double consciousness, the discovery of the modular organization of the brain modified that view. To the extent that brain modules operate in large measure with information that remains distributed and outside the realm of consciousness, some mechanism is required to explain how consciousness can make sense out of the incomplete and less than cohesive information that comes its way. Laboratory experiments with split-brain patients, performed by Gazzaniga and his colleagues, suggest that this mechanism is located in the left hemisphere.

In the case of a split brain, the left hemisphere, when presented with information that is properly contextualized only by what is presented to the right hemisphere, will recontextualize and make good sense of that information in terms of a scene presented to the left hemisphere. In other words, the left hemisphere will inventively integrate the aberrant information into a reasonable story consistent with the left-hemisphere context. In one experiment the left hemisphere is visually presented with a picture of a chicken claw; the right hemisphere is visually presented with a scene involving snow. The patient is then asked to pick out, from an array of pictures, images that are associated with the presented pictures. As expected, with his right hand (which is controlled by the left hemisphere) the patient points to a chicken, and, with his left hand, to a snow shovel, consistent with the snow scene presented to the right hemisphere. The patient is then asked to explain why he chose these items. His response: 'Oh, that's simple. The chicken claw goes with the chicken, and you need a shovel to clean out the chicken shed' (Gazzaniga, 1998, p. 25). The reasonable interpretation which makes sense out of a piece of information that has its proper context registered in the right hemisphere (the shovel) is fully generated in the left hemisphere. Other experiments show that emotions generated by right-hemisphere presentations are interpreted in rational frameworks generated by the left hemisphere.

In normal, non-split brains, of course, the contextualized information from the right hemisphere is delivered to the interpreting mechanism in the left hemisphere across the nerve fibres of the corpus callosum. The interpreter integrates that information and generates rational coherence vis-à-vis other aspects of current and remembered experience. What we find, then, in the left hemisphere, is what the ancient Greeks would have called the rational part of the soul. Gazzaniga, in agreement with philosophers like Plato and Aristotle, indicates that this is what distinguishes the human animal from the subhuman animal: the capacity for rational invention.

The interpreter and its talent for rational invention are not only essential for making sense out of the perceptual world, but are also involved in making sense out of our own selves. We interpret our own behaviours, and once we begin to construct autobiographical accounts in a certain way, our inventiveness provides for an enhanced self-concept. The events of our lives are interwoven into a belief system generated in the left hemisphere. If genetic factors take primary responsibility for brain development, if consciousness seems to be playing catch-up with what the brain already knows, and if this looks dismally deterministic, at least this one aspect of brain function, the interpreter, frees us from that determinism and allows for unique development on the basis of personal experience and culturally relative contexts. The individual self is the product of an inventive brain working within a matrix of unique perspectives provided by culturally structured experience.

In this process it is quite probable that certain biases or interpretational prejudices enter into the formation of self-concepts. Because we tend to interpret new information in light of previous experience and established categories, it might seem that our self-concept is relatively conservative. But this is not the complete story. Gazzaniga points out that the interpreter not only shapes current experience in light of past experience, it also constructs and reconstructs our past. But this is again a function that allows the human brain to transcend its automatic processes. The personal self, which the mind discovers after the brain has already performed its interpretation, appears as an integrated and conservative entity, but this in fact is the result of the sub-personal inventive process. In this sense the self is invented, not by phenomenological construction, but by a factory-installed mechanism designed by nature to transcend its own material determinations. The self is not self-constituting but is the outcome of a self-organizing neuronal process that creatively incorporates biases, and even errors, in perception, memory and judgment.

Again, Gazzaniga's work with split-brain patients helps to show that both the self and its past are constructed in brain processes, and not always in an accurate way. When the right hemisphere is presented with pictures related to a series of images previously presented, it correctly realizes that these are new and different pictures, but is incapable of generating a variant storyline that would integrate both the past and present images. The interpreter in the left hemisphere, however, can recast the original storyline to accommodate the new images. 'The interpreter-charged left brain remembers the gist of the story line and fills in the details by using logic, not real memories. The right brain, without an interpreter, regurgitates the literal story, not one embellished by the interpreter' (1998, p. 145). The interpreter adds or cuts details to provide for consistency, and in the process incorrectly remembers the new images as belonging to the original series. Such false memories are in the service of rationality. They help to reduce cognitive dissonance and guarantee some degree of self-integration.

At a recent conference in Tucson I had the pleasure of introducing Professor Gazzaniga's lecture in which he covered many of the fascinating themes found in his book. We were motivated to follow up that discussion in a format that allowed for some further considered reflection. The following interview is the result of an e-mail correspondence conducted over the course of several weeks in August of this year.

Interview

Shaun Gallagher: You are well known as one of the pioneers in the exploration of split-brain phenomena. I would like to start by asking about the notion of the self in this context. Philosophers such as Nagel have used split-brain subjects to argue for the possibility of more than one self to a body. What do you think of these arguments, and do you agree with the concept of self that seems implicit in them – that is, that the self can be identified with the brain, or as Nagel puts it:

I could lose everything but my functioning brain and still be me . . . . If my brain meets these conditions then the core of the self – what is essential to my existence – is my functioning brain. . . . I am not just my brain. . . . But the brain is the only part of me whose destruction I could not possibly survive. The brain, but not the rest of the animal, is essential to the self (1986, p. 40).

Michael Gazzaniga: Does anyone really disagree with Nagel on that point? On the issue of one versus many selves, I would argue there is one self that can operate in many modes. I am a husband, father, scientist, skier, professor, etc. Each one of those modes calls upon different aspects of myself and each mode feels sort of different. There is no doubt that I am also capable of killing someone, if shot at. After all, most soldiers are family men who are called upon to exercise an aspect of their self they would rather not see.

All of this resides in the brain. Where else? In other words, the 'self' is the product of the workings of the brain. It may be somewhat misleading (or misguided) to discuss the 'self' as some unified, coherent entity except as the product of the operation of the brain.

Gallagher: If we accept that the self is intricately related to the brain – possibly even reducible to the brain – it is also the case that a variety of theorists have conceived of this relation in different ways. Ramachandran and Hirstein (1997), in a paper in an earlier instalment of this special issue, describe the self as a product of an executive mechanism, specifically a limbic executive rather than a frontal one. Nagel suggests that the whole brain is essential to the self. How do you conceive of this relation between brain and self?

Gazzaniga: It's not simple. The split brain work allowed us to ask the question, does each separated hemisphere have its own self? At one level each can remember, emote, cognate and execute decisions. At some level of reasoning, each must have its own self.

Over the years, however, it became clear that each hemisphere works at hugely different levels of understanding the world. The left hemisphere is completely self-aware, interprets its actions and feelings and those of the world. The right hemisphere doesn't do much of this and is an extremely poor problem-solver. In simple tests of self awareness, such as showing pictures of the patient or of family members, each hemisphere seems to indicate recognition. But is each equally aware? It is hard to say as the recognition displayed by the right hemisphere might simply be an associative response. It is, of course, very difficult to interrogate the right hemisphere about its level of 'self awareness' since it typically doesn't support spoken language.

But to answer Nagel's suggestion that the whole brain is necessary for the self more directly, it may be more reasonable to say that the self changes as a result of damage or disconnection and that less than the whole brain is capable of sustaining some sort of self. As you know, my proposal that the left hemisphere has an 'interpreter' that is constantly evaluating the state of the body and mind plays a role here.

Gallagher: Yes, perhaps you could say something more about the interpreter mechanism. If I understand it properly, it's a neuronal mechanism that monitors the subject's responses to the environment. But it monitors behaviour on many different levels – cognitive as well as emotional – and as you say, it evaluates states of the body as well as mental states. Do you picture the interpreter as a complex neuronal location with inputs from all the relevant parts of the brain to track all of these dimensions, or is it a distributed process, albeit confined to the left hemisphere?

Gazzaniga: You said it all. It is the system that asks why and how and then tries to provide an answer. It is definitely in the left hemisphere but I can't be more specific.

Gallagher: In your most recent book, The Mind's Past, you make some very controversial claims. The first is that psychology as a science is dead. Despite the fact that there are still interesting questions to be answered about psychological processes, you think that psychology itself will not be able to answer these as well as neuroscience. Do you have a specific kind of psychology in mind, or do you still see a role for neuropsychology or for psychiatry? I think of the latter in view of the fact that there are still no viable neuroscientific models to explain psychopathologies such as schizophrenia or autism.

Gazzaniga: You surely have that right! That bit of rhetorical flourish about the death of psychology was to draw attention to what already has happened. Psychology, as a term, has been hijacked by the clinicians and counsellors and has little relationship to the activities of academic 'psychologists' and other mind scientists. That is fine. What is done now by people who study the mind is a vast array of activities that range from evolutionary theory to psychophysics, neuroscience, economics, anthropology and computer science, to mention a few.

I agree with John Tooby's nifty point that when you ask the question 'What is the brain for?' it becomes clear it is to be understood in informational terms. The brain is there to make decisions about how to survive and reproduce. Once this is understood, the connection between the more traditional fields of neuroscience, cognitive science and experimental psychology and those of related fields like economics and anthropology becomes clear [see, e.g. Cosmides and Tooby, 1994; Tooby and Cosmides, 1990].

Universities are the last places on earth to find change. But in my opinion, we should be looking to formally dissolve many traditional academic departments and to form new ones. I like the idea of the Mind Sciences.

Gallagher: So your point is not the same as that made by Sigmund Koch (1993), which was perhaps a more radical claim, if I understand it, namely not only is psychology dead, but the various disciplines that are taking its place are not closely connected. At least the sciences that might take the name 'Mind Sciences' are all focused on connected questions?

Gazzaniga: It is an inclusive term at this point. If psychology meant what George Miller (Miller & Buckhout, 1973) meant when he called it the science of mental life, I would have no quibble with the term 'psychology'. But it doesn't, and meanwhile all these other related fields are beginning to worry and think about the nature of mind. So, yes, they are all connected. They want to know about how the brain enables mind and they assume the answers will come at the level of decision networks.

Gallagher: Again in your book The Mind's Past, you present a strong argument for genetic determinism, and you seem to reject the notion that the brain has a large degree of plasticity and that experience is essential for development. If that's the case, how do you explain the experimental data that indicate a large degree of brain plasticity, and that experience is necessary for proper development? I'm thinking of the famous work on critical periods in the visual system by Hubel and Wiesel (1963) and some of the more recent work by Carla Shatz (1992) and others which you discuss in your book. And you also cite the work by Merzenich (1984; 1987) on the owl monkey which showed that cortical representations of the body are subject to modification depending on the use of sensory pathways. Why doesn't this provide strong indication of the importance of experience for neuronal development?

Gazzaniga: There is no arguing there is some kind of plasticity mechanism in our brains. After all, we all learn things, like English, French, Japanese and our home telephone number. We can learn a new word in a flash, a new location in an instant. However, no one suggests that kind of learning is supported by the kinds of neural mechanisms you mention.

Shatz's beautiful work shows some activity-dependent development occurs way before the developing organism has any independent environmental experience. Surely that kind of dependency goes on, but I don't think there should be a wholesale importation of those concepts into how the brain gets built by psychological experience. There is a huge difference.

As for the extensive work by Merzenich and his colleagues, there can be little question that maps change as the result of experience. The question is, what does it mean? There are other results that I report in the book that show these changes may only occur when an alternative substrate is present for those changes to occur upon.

Gallagher: Part of your evolutionary, genetic view depends on the concept of neural modules, or as you put it, 'neural devices that are built into our brains at the factory' (p. xiii). One objection to this view comes from developmental psychology. For example, the idea that modular functions come 'on line' at certain critical moments in development has been questioned by Gopnik and Meltzoff (1997). Although they do not reject certain innate elements as important to development, they argue that at various times during development young children take up a completely inaccurate view of the world. In this case they wonder why evolution would have designed a sequence of incorrect modules rather than providing for the most direct and efficient course to maturity.

Gazzaniga: Does it really matter what we are like during the transition from childhood to adulthood? Surely as brain development unfolds, the child is going through stages where interpretations of the world give rise to bizarre beliefs and perceptions. I don't see how that point argues against the idea that many devices are built into our brains. Actually, their bizarre beliefs serve up many of the endearments we all experience as parents.

Gallagher: Would such childhood beliefs and mis-perceptions be the result of a developmental process related to the left-hemisphere interpreter? Or does the interpreter come pre-programmed, so to speak, and is it simply not getting the complete information it needs? I'm trying to understand where you might draw the line between a nativist account of brain development and one that would place the emphasis on perceptual experience.

Gazzaniga: I think the interpreter is doing its job in the child. During those years when it occasionally comes up with phantasmagorical stories about life, the child's knowledge system is not yet prepared to reject the idea served up by the interpreter. So the child plays with the idea, and the parents are amused. Pretty soon, however, the child realizes the Christmas canoe could not have fit down the chimney.

Gallagher: To return to the notion of the self, in The Mind's Past you present the latest version of the theory of the self as an illusion or a fiction. Perhaps Hume's account of the self as a product of an overworked imagination is the earliest of these theories. More recently narrative theory has suggested that selves are products of the stories we tell about ourselves. Dennett (1991) has worked out a theory of the self as a centre of narrative gravity, that is, as an abstract construct located at the intersection of the various tales we tell about ourselves. How does your theory differ from these?

Gazzaniga: Well I think Dennett borrows heavily from the idea of the interpreter to generate that idea, so I am in general agreement with his formulation.

Gallagher: Even if the self is an illusion or fiction, would you agree that it plays an important practical role in our individual lives? I'm thinking of this question in relation to what you say about free will, that is, that the brain has already done the work before we become aware of it. How, in that case, are we able to talk about responsibility, or character, or ethics? One might be tempted to argue that 'I didn't really do it, because, after all, I don't really exist (I'm nothing more than a fiction); and even if I did do it, I didn't freely choose to do it.'

Gazzaniga: The self is not a fiction. It is the centre of our personal narrative, as Dennett says. By trying to articulate how that develops, how the brain enables that sense of self, I do not mean to say the self is a fiction. It is that which the interpreter creates and gives narrative to. Now in The Mind's Past the lead chapter is called 'The Fictional Self', but that was to draw attention to the fact that the interpreter calls upon all kinds of false information to build that narrative. So the construct that is derived comes from true facts of one's life as well as false facts that we believe to be true. The resulting spin that comes out as our personal narrative is, as a result, a bit fictional, like the idea that we are in control of our behaviour.

Having said that, I do not for a minute think this view relieves us of personal responsibility. I wrote about this at length in my book The Social Brain (1985). The late Donald M. MacKay made the argument over 30 years ago that even though we could view the brain as being as mechanical as clockwork, the idea of personal responsibility does not suffer (MacKay, 1967). His long argument was based on the idea that in order for something to be true, it had to be true and public for all people. So, if you are sure that I am going to eat a Big Mac at noon, all I have to do to show you cannot predict my behaviour is not eat it at noon. You respond: well, what if I keep the prediction a secret? His argument was that the prediction might be true for you but that it wasn't true for the whole world. In order for that to occur, it had to be made public and the minute it was, I could negate it.

It is a clever argument which I tend to agree with. Nonetheless, that argument aside, it also is obvious to me that people behave better when they believe they are in charge of their own behaviour. That is good enough reason for me to support the concept.

Gallagher: Let me raise two things in response to this. First, it seems that on pragmatic grounds you endorse the idea of taking moral responsibility for one's actions, but on neuroscientific grounds you rule that possibility out by ruling out free will. Is that right?

Second, let's look at MacKay's argument. The only way I could predict with certainty that you were going to eat a Big Mac at noon is if I knew your current brain state and all of the inputs to the system that will come along between now and noon. Since right now I can't know all of the precise inputs that will occur between now and noon, I can't really predict anything about your action with a large degree of certainty. My prediction itself, if it is known to you, is another input that I would have to consider in my prediction. At that point we have an obvious paradox involving public predictions. Perhaps we could get around that difficulty by placing the prediction in a sealed envelope in a public place. If we did that, there are still two possible outcomes. (1) My prediction might be right, and this might count as evidence for your lack of free will. (2) My prediction might be wrong, because I still might not know all of the social and environmental inputs – you might get in an accident on the way to McDonald's. The fact that my prediction is wrong does not show that you have free will; it shows that I don't have enough information.

Gazzaniga: Let me answer the second first. MacKay's point is that in order for your prediction to be true for everyone it has to be made available to me as well. There is no such thing as a private set of Kepler's laws. They are true because they hold for one and all. That means you can have no secret predictions. And in MacKay's hands he jumps ahead of the current problem of knowing all the inputs, etc. He assumes that someday brain scientists will know this sort of thing.

So, at this point I really can't find a problem with his analysis. His classic paper, Freedom of Action in a Mechanistic Universe (1967), spells out the issue in detail. And as a result, I still stand with the idea of the great importance of taking moral responsibility for one's actions.

Gallagher: In The Mind's Past you suggest that 'autobiography is hopelessly inventive' and that false memories can be productive for a coherent life narrative. Now philosophers often worry about self-deception and how it's possible. But your claims about autobiographical inventiveness do not imply self-deception, that is, there is no level on which the individual knows that their false memories are false. Yet, philosophically, this conception of self-inventiveness should motivate in us some suspicion about who we are. Should we worry about this in any way?

Gazzaniga: We sure should. That is exactly the problem with false memories. In terms of our personal recollection, we can't tell the difference between true and false memories. When complex scalp recordings are made, the experimenter cannot detect that the brain responds differently to a true as opposed to a false memory. In short, both are part of the fabric of our personal narrative.

Gallagher: I wonder if you would accept the name Platonic neuralism for your theoretical position. I have in mind your statement: 'Every newborn is armed with circuits that already compute information enabling the baby to function in the physical universe. The baby does not learn trigonometry, but knows it; does not learn how to distinguish figure from ground, but knows it; does not need to learn, but knows, that when one object with mass hits another, it will move the object' (p. 2). On this view, the child is much as Plato describes in his theory of recollection. Remember in Plato's dialogue, the Meno, when Socrates sits down with the slave boy and shows that the boy doesn't learn geometry, but already knows it. On some level, and with regard to some basics, we don't actually learn anything new; we simply recollect what we already know from prior lives (and here we could substitute the evolutionary explanation for the notions of reincarnation that Plato discussed). The knowledge is already in the brain.

Gazzaniga: I think the work of several of our leading developmental psychologists points to this conclusion. Elizabeth Spelke, Susan Carey and Renée Baillargeon all argue the point convincingly [see e.g. Baillargeon et al., 1985; Xu & Carey, 1996]. New work that Carey is doing with Marc Hauser on the rhesus monkey shows that there seem to be some mental primitives that are shared by both the monkey and the young child.

So yes, much of what we are is factory installed, and maybe some of it comes from early in our evolutionary history.

Gallagher: Your book has the virtue of being both challenging and quite readable. Have you had any surprising responses to it?

Gazzaniga: One of the pleasures of our times is email and the internet. After The Mind's Past was published in May, 1998, I found that dozens upon dozens of terribly bright and interested people can easily find you and take the time to write for clarifications of this or that. One person in particular, Richard Saicho, wrote to comment on the book and then provided a wonderful poem he had written, inspired by its themes, entitled 'There is an "I" Within Me . . .' He says very well what I am trying to say.

Postscript on Free Will

Gazzaniga admits that the sense of free will is a valuable one for ethical reasons, yet one might think that his own work contributes to destroying people's faith in the existence of free will. Furthermore, many people have been critical of the ethical implications of the sort of genetic determinism that he seems to be promoting. Gazzaniga is nonetheless insistent about the importance of moral responsibility. This depends on the notion of the interpreter, and its ability to provide a certain amount of freedom even within the constraints of genetic determination. This is not unlike an ancient theological solution, closely associated with St. Augustine and atheistically echoed in Sartre's famous formulation about being condemned to freedom. God or evolution has bestowed upon us a mechanism that frees us from the immediate demands of the environment. With that freedom we gain a moral dimension which transcends animal life. Our rational inventive nature comes with a set of ethical responsibilities.

How precisely can this work? We are still faced with the problem of making free will consistent with the idea that the brain does its work before we become aware of it. Gazzaniga cites the now famous experiments of Libet. 'Libet determined that brain potentials are firing three hundred fifty milliseconds before you have the conscious intention to act. So before you are aware that you're thinking about moving your arm, your brain is at work preparing to make that movement!' (1998, p. 73; see Libet, 1996; Libet et al., 1982). Faced with such facts, is it sufficient for ethical accounts to say that the brain not only decides and enacts in a preconscious fashion, but also inventively tricks consciousness into thinking that we consciously decide matters and that our actions are personal events? Is free will nothing more than the sense or impression of free will?

I think that this problem can be solved as long as we do not think of free will as a momentary act. Once we understand that deliberation and decision are processes that are spread out over time, even, in some cases, very short amounts of time, then there is plenty of room for conscious components that are more than accessories after the fact. Such processes depend on a very basic biological function found in all living organisms: the feedback loop. In nature, even feedback loops that are purely automatic require time, and loops that include conscious components require a specious or subjective present – that is, a temporal duration that is stretched out over at least several seconds (see Humphrey, 1992). In the human being, as Gazzaniga shows, the work of the interpreter provides feedback that needs to be integrated into an extended loop.

There is some feedback that is irrelevant to the issue of free will. If components involved in a feedback process are limited to completely physical events such as nonconscious brain events, the loop is completely deterministic. Once we introduce the kinds of processes that Gazzaniga describes under the heading of the interpreter, however, conscious events come to be included in the feedback, and certain things in the environment begin to matter to the conscious agent. In terms that seem consistent with Gazzaniga's notion of the interpreter, I would suggest that when interpretation is included in temporally extended feedback loops, or in what Hacking (1995) calls 'looping effects', human events take on certain moral connotations. Narrative structures produced by the interpreter, rather than being epiphenomenal, have real effects on behaviour. To the extent that these effects enter into the feedback loops that produce further behaviour, that behaviour becomes intentional.
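The contrast can be made concrete with a toy numerical sketch (my own illustration, not anything drawn from Gazzaniga or Hacking; all function names, variables and coefficients are invented for the example). In the first loop the next state depends only on the previous state, so the trajectory is settled in advance; in the second, a running 'narrative' variable, standing in very crudely for the interpreter's account of the system's own behaviour, is fed back into each update, so the story the system tells about itself alters what it does next.

```python
# Toy illustration (not from the source): a purely automatic feedback loop
# versus one in which an interpretive "narrative" re-enters the loop.
# All names and coefficients are arbitrary and purely illustrative.

def automatic_loop(state, steps=5):
    """Mechanical feedback: the next state depends only on the last state."""
    history = [state]
    for _ in range(steps):
        state = 0.5 * state + 1.0          # fixed update rule, no interpretation
        history.append(state)
    return history

def interpretive_loop(state, steps=5):
    """Feedback that includes an interpretive component: a running 'narrative'
    (a summary the system keeps about its own behaviour) feeds back into the
    next update, so the trajectory depends on the story as well as the state."""
    narrative = 0.0                         # the interpreter's running account
    history = [state]
    for _ in range(steps):
        narrative = 0.8 * narrative + 0.2 * state    # narrative updated from behaviour
        state = 0.5 * state + 1.0 + 0.3 * narrative  # behaviour shaped by the narrative
        history.append(state)
    return history

if __name__ == "__main__":
    print("automatic:   ", [round(x, 2) for x in automatic_loop(1.0)])
    print("interpretive:", [round(x, 2) for x in interpretive_loop(1.0)])
```

The two trajectories diverge only because the narrative term is allowed back into the loop; on the view sketched above, that is the formal sense in which an interpretive construct can be causally effective rather than epiphenomenal.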

What we call free will cannot be conceived as something instantaneous, a knife-edge moment located between being undecided and being decided. If that were the case it would completely dissipate in the three hundred fifty milliseconds between a brain event and our conscious awareness. Free will involves temporally extended feedback or looping effects that are transformed and enhanced by the introduction of interpretational consciousness. This means that the conscious sense of free will, even if it starts out as a fiction or accessory generated by the brain, is itself a real force that counts as further input in the formation of our future action, makes that action free, and bestows responsibility on the agent.