Copyright 1991 The New York Times Company
The New York Times
November 10, 1991, Sunday, Late Edition - Final
Section 7; Page 1; Column 1; Book Review Desk
3420 words
What Really Goes On in There

By George Johnson
George Johnson is an editor of The Week in Review of The New York Times and the author of "In the Palaces of Memory: How We Build the Worlds Inside Our Heads."


CONSCIOUSNESS EXPLAINED
By Daniel C. Dennett.
Illustrated. 511 pp. Boston:
Little, Brown & Company. $27.95.

Wielding his philosophical razor, William of Ockham declared, in the early 14th century, that in slicing the world into categories, thou shalt not multiply entities needlessly. He might have been pleased when, half a millennium later, James Clerk Maxwell helped tidy things up by writing the equations that show magnetism and electricity as perpendicular shadows cast by light beams, radio waves, X-rays and other forms of what we now call electromagnetic radiation. Einstein did Maxwell one better by equating mass with energy. And today the physicists promise us that once we give them their superconducting supercollider, they will take a giant step toward the day when they can unify light with gravity and the two forces at work inside the nuclei of atoms -- showing how everything, even the geometry of space and time, crystallized from the primordial flash of the big bang.

But there would still be a gaping hole in this grandmother of unification theories: an explanation of the minds that are doing the unifying. Since brains are made from the same atoms as everything else, there must be some way to unify mind and matter. The alternative would be to go against the Ockhamite tradition and, like Descartes, admit mind as a separate substance operating outside the laws of physics.

Daniel C. Dennett, the director of the Center for Cognitive Studies at Tufts University, is one of a handful of philosophers who feel this quest is so important that they have become as conversant in psychology, neuroscience and computer science as they are in philosophy. "Consciousness Explained" is his attempt, as audacious as its title, to come up with a scientific explanation for that feeling, sometimes painful, sometimes exhilarating, of being alive and aware, the object of one's own deliberations.

Ever since Emil Du Bois-Reymond demonstrated in 1843 that electricity and not some supernatural life force travels through the nervous system, scientists have tried to explain mental life biologically. It's been a long, slow haul. An important step was taken in the early 1940's when the neurologist-philosopher Warren McCulloch and the teen-age prodigy Walter Pitts showed how webs of neurons exchanging electrical signals could work like little computers, picking out patterns from the confusion buzzing at our senses. Inspired by this metaphor, neuroscientists have been making the case that memories are laid down when the brain forms new connections, linking up patterns of neurons that stand for things in the outside world.

But who, or what, is reading these neurological archives? The self? The ego? The soul? For want of a theory of consciousness, it is easy to fall back on the image of a little person -- a homunculus, the philosophers call it -- who sits in the cranial control room monitoring a console of gauges and pulling the right strings. But then, of course, we're stuck with explaining the inner workings of this engineer-marionette. Does it too have a little creature inside it? If so, we fall into an infinite regress, with homunculi embedded in homunculi like an image ricocheting between mirrors.

The great success of cognitive science has been to point a way out of this fun house. As Mr. Dennett explained in an essay in his 1978 book, "Brainstorms," the reason we get the regress is that at each level we are assuming a single homunculus with powers and abilities equal to those of its host. Suppose instead that there are in the brain a horde of very stupid homunculi, each utterly dependent on the others. Make the homunculi stupid enough and it's easy to imagine that each can be replaced by a machine -- a circuit made of neurons. But from the collective behavior of all these neurological devices, consciousness emerges -- a qualitative leap no more magical than the one that occurs when wetness arises from the jostling of hydrogen and oxygen atoms.

The information processing carried out by the homuncular hordes need not be a particularly orderly affair. In the late 1950's a computer scientist from the Massachusetts Institute of Technology named Oliver Selfridge unveiled a model called Pandemonium, in which homunculi -- he called them demons -- shouted at one another like delegates in a very democratic parliament, until they reached a consensus on what was going on outside the cranial chamber. In a more recent theory, called the Society of Mind, Selfridge's colleagues Marvin Minsky and Seymour Papert call these homunculi agents. The psychologist Robert Ornstein calls them simpletons, perhaps the most appropriate name of all.
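For readers who like to see such an architecture in miniature, a toy version of Pandemonium can be sketched in a few lines of Python. This is my own illustration, not Selfridge's actual program: the demons, the features they listen for and their weights are all invented. Each demon shouts with a loudness proportional to how well the evidence fits its specialty, and a decision demon simply heeds the loudest voice.

    # A toy Pandemonium: each demon scores the evidence and "shouts";
    # the decision demon heeds the loudest voice. The demons, features
    # and weights here are illustrative inventions, not Selfridge's.

    def vertical_line_demon(features):
        return features.get("vertical_edges", 0.0)

    def curve_demon(features):
        return features.get("curved_strokes", 0.0)

    demons = {
        "letter H": lambda f: 0.7 * vertical_line_demon(f) + 0.3 * f.get("crossbar", 0.0),
        "letter O": lambda f: curve_demon(f),
    }

    def decide(features):
        # Every demon shouts at once; the loudest interpretation wins.
        shouts = {name: demon(features) for name, demon in demons.items()}
        return max(shouts, key=shouts.get), shouts

    evidence = {"vertical_edges": 0.9, "crossbar": 0.8, "curved_strokes": 0.1}
    winner, shouts = decide(evidence)
    print(winner, shouts)  # 'letter H' wins this particular shouting match

No demon is in charge; the "decision" is nothing more than a comparison of loudness, which is the point of the parliamentary metaphor.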

Some homunculi might be dedicated to such basic tasks as detecting horizontal and vertical lines, or identifying phonemes. Their reports would be monitored by other homunculi (shape recognizers, word recognizers) that are monitored by still other homunculi. Suppose you are watching a play. Tripped by reports from various line and shape detectors, the homunculus that recognizes bilateral symmetry might fire, and its signals (along with those of other homunculi) would activate the person detector. There is someone on stage.

But before that final flash, other parts of the brain might be entertaining rival hypotheses -- what Mr. Dennett calls multiple drafts. Spinning tops and pine trees can also appear bilaterally symmetrical. But the minority committees of homunculi considering these interpretations would be contradicted by reports from various motion detectors (trees don't move, people don't spin) and finally by the sighting of moving columns generally agreed by yet other homunculi to be arms and legs.
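A toy version of this competition, with detectors and scores of my own invention rather than anything from Mr. Dennett's book, might look like this: several rival drafts of the same symmetrical shape accumulate or lose support as reports arrive from the motion detectors, and only then does one interpretation win.

    # Toy "multiple drafts": rival interpretations of a bilaterally
    # symmetrical shape, revised as motion evidence arrives. All the
    # detectors and numbers are illustrative assumptions.

    drafts = {"person": 0.5, "pine tree": 0.5, "spinning top": 0.5}

    def report(evidence, drafts):
        # Each report raises or lowers the standing of rival drafts.
        for interpretation, delta in evidence.items():
            drafts[interpretation] += delta
        return drafts

    # The shape moves across the stage: trees don't move...
    report({"pine tree": -0.4, "person": 0.2, "spinning top": 0.1}, drafts)
    # ...and it isn't rotating: people don't spin.
    report({"spinning top": -0.4, "person": 0.2}, drafts)
    # Moving columns are agreed by other homunculi to be arms and legs.
    report({"person": 0.3, "pine tree": -0.1, "spinning top": -0.1}, drafts)

    winner = max(drafts, key=drafts.get)
    print(winner, drafts)  # 'person' emerges as the winning draft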

Considering all this hubbub, maybe it's a blessing that we are not more conscious than we are. Usually it is only the winning interpretations that we become aware of. But occasionally we get to eavesdrop on the behind-the-scenes debate. Sometimes in winter, I glance out the back window of my apartment in Brooklyn and am startled to see an old Indian woman in a shawl, like a figure from an R. C. Gorman painting, standing on the terrace of the building behind mine, huddled against the wind. It takes a second longer before a rival, more convoluted interpretation emerges: the shape is really a tree wrapped in burlap to protect it until spring. Sometimes, driving fast with the window down, you might find your word detectors, fed by your phoneme detectors, misfiring, picking voices out of the wind.

But what exactly is happening when these subliminal judgments shove their way into consciousness? As Mr. Dennett explains, if the result of all the homuncular discussion is that a winning interpretation is presented for appreciation by some central self, then we have solved nothing. We're back to the image of an intelligent, fully conscious homunculus sitting in a control room, which Mr. Dennett calls the Cartesian Theater.

His way out of this mess is to propose what he calls a Joycean machine, a kind of mental operating system (like the computer programs Windows or MS-DOS) that acts as a controller, filtering the cacophony of inner voices into a silent narrative -- a stream of consciousness. To avoid the problem of infinite regress, he hypothesizes that this master controller is not a fully cognizant marionette but a "virtual machine," created on the fly from temporary coalitions of stupid homunculi. It is because of this mental software, he proposes, that we can not only think but reflect on our own thinking, as we engage in the step-by-step deliberations that occupy us when we are most aware of the plodding of our minds.

For someone who is encountering this kind of theory for the first time, that is probably not a very convincing summary. But Mr. Dennett's argument is not easily compressible. At a time when so many nonfiction books are just horribly long magazine articles, he makes use of just about every one of his 500 pages. As he readily concedes, it is practically impossible -- for him or anyone else -- to keep from lapsing into a deeply grooved mental habit: thinking that there is some kind of ego inside us, peering out through the ocular peepholes. To break us of these assumptions, he makes his argument cumulatively, using thought experiments and anecdotes to build up his case piece by piece. For 50 pages or so, he attacks his subject from one angle, until we start to get a glimmer of what he means. Then he retreats and attacks from another angle.

Consider, for example, his story of Shakey, a robot invented in the late 1960's by Nils Nilsson and his colleagues at Stanford Research Institute, a scientific think tank in Menlo Park, Calif. Shakey was a box with motorized wheels and a television camera for eyes. Conceived in the dark ages of electronic miniaturization, Shakey had a brain that was too big to keep on board, so the robot used a radio transmitter to communicate with a central computer. Human operators would type commands on a keyboard, like "Push the box off the platform." Shakey would dutifully explore the room until it found the box. Then it would push a ramp up to the platform, roll up on top and shove the box onto the floor. The robot was able to navigate because its software was designed to recognize the signature that boxes, pyramids and other objects left on the electronic retina of its video eye. As an object came into sight, the computer would measure differences in illumination, detecting an edge here, a corner there. Referring to rules about how different objects look from different vantage points, it might decide whether it was seeing, say, the slope of a pyramid or the incline of a ramp.
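The core trick, measuring differences in illumination to find an edge, is simple enough to sketch. Here is a toy version in Python, a rough illustration of the idea rather than Shakey's actual vision code, with a made-up image and threshold: scan a grid of brightness values and flag every spot where the brightness jumps sharply between neighbors.

    # Toy edge detection in the spirit of Shakey's vision system:
    # mark the places where brightness changes sharply between
    # neighboring pixels. The image and threshold are invented.

    image = [
        [9, 9, 9, 1, 1],   # a bright box against a dark background
        [9, 9, 9, 1, 1],
        [9, 9, 9, 1, 1],
    ]

    THRESHOLD = 4  # how big a brightness jump counts as an edge

    def find_edges(image):
        edges = []
        for r, row in enumerate(image):
            for c in range(len(row) - 1):
                if abs(row[c] - row[c + 1]) > THRESHOLD:
                    edges.append((r, c))  # edge between columns c and c+1
        return edges

    print(find_edges(image))  # the vertical edge of the "box," row by row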

Mr. Nilsson would watch these cogitations on a video monitor, as Shakey confronted a big dark blur, tracing its edges with bold white lines and finally declaring it a box. But (this is the punch line) there was no master homunculus inside Shakey watching a television screen. The monitor was purely for the benefit of the human observers; when it was unplugged, the robot worked just fine. One would look in vain for fleeting images of boxes and pyramids reverberating inside Shakey's circuitry. The robot's brain was just processing signals, the ones and zeros of binary code. It had no need of a Cartesian Theater. But it acted as though it had one.

Now let us retreat and approach the problem from a different perspective, an evolutionary one.

In the beginning, the dividing line between self and other was no more than a membrane of lipids, sugars and proteins separating the inside of a cell from the outside world. But little by little pseudopods and flagella, the unicellular precursors of arms and legs, evolved to help organisms embrace the edible and avoid being eaten. As multicellular creatures evolved, Mr. Dennett explains, they developed more complex survival mechanisms: duck when confronted with a looming object (it might be a buzzard or a rock); pay attention to vertical symmetry (it might be another creature looking at you, in which case you could draw on detectors that distinguish between predator, prey and potential mate). Mr. Dennett speculates that these survival mechanisms are the precursors of the mental homunculi. Over eons, animals acquired an evolutionary grab bag of these self-perpetuating tricks, which allowed them not only to monitor the environment passively but to explore, hungering for the information that increased their odds of survival as surely as a good piece of meat.

At first many of the neural devices were discrete, Mr. Dennett speculates, unconnected to one another. But slowly they began to develop communication lines. Imagine the first primitive people, just dimly conscious, learning to use language to milk their fellow humans for information: "Is there food in that cave or a jaguar?" Then one day, someone might have asked for information when there was no one else around: "Now let me see, where was it that I left that chisel?" And, lo and behold, another part of his brain answered. A loop was closed in which the vocal cords, the vibration of the air and the eardrums were used as a pathway to connect one part of the brain with another. A virtual wire was formed. Eventually this signaling became silent -- the voices were in the head.
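The loop Mr. Dennett imagines can be caricatured in a few lines of code, again my own invention rather than his: one subsystem drops a question into a channel, and a different subsystem, which happens to hold the answer, overhears it and replies. Whether the channel is the air between mouth and ear or a silent inner pathway makes no difference to the logic.

    # Toy "virtual wire": one homunculus poses a question into a
    # channel; another, which holds the answer, overhears and replies.
    # The memory contents and channel are illustrative assumptions.

    memory = {"chisel": "on the workbench"}

    def ask(question, channel):
        channel.append(question)

    def answer(channel):
        # A different part of the "brain" overhears the question.
        question = channel.pop()
        return memory.get(question, "no idea")

    # Spoken aloud at first; later the loop goes silent, and the
    # channel is just a data path inside one head.
    inner_speech = []
    ask("chisel", inner_speech)
    print(answer(inner_speech))  # 'on the workbench'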

To make sense of all the mental racket -- the shouting of the homunculi -- the Joycean machine developed and assumed the task of deciding what to think about next. As Mr. Dennett sees it, this is a good part of what consciousness is. Riding on top of the neural machinery -- the hardware of the brain -- is a program that simulates a serial computer, creating a step-by-step narrative from the tumult unfolding in the world and in the head.
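In computational terms, the claim is that a serial process can be simulated on parallel hardware. The sketch below, my own illustration and not Mr. Dennett's, lets many homunculi clamor at once while a simple controller serializes their outputs into a single stream, one thought at a time; the urgencies and the thoughts themselves are invented.

    # Toy Joycean machine: many homunculi produce candidate thoughts
    # in parallel; a serial controller emits them one at a time,
    # producing a single narrative stream.

    import heapq

    def joycean_stream(candidates):
        # candidates: (urgency, thought) pairs shouted by the homunculi.
        queue = [(-urgency, thought) for urgency, thought in candidates]
        heapq.heapify(queue)
        while queue:  # one thought at a time: the serial narrative
            _, thought = heapq.heappop(queue)
            yield thought

    shouts = [(0.9, "where did I leave that chisel?"),
              (0.4, "the window is open"),
              (0.7, "it is getting cold")]

    for thought in joycean_stream(shouts):
        print(thought)  # the clamor, flattened into a stream of consciousness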

The Joycean software is not inborn like, for example, the looming-object detector. It is an accretion of learned behaviors, habits of mind, developed for recruiting teams of homunculi to deal with the long deliberative processes that the brain's wetware alone is not well equipped to handle -- planning a trip to Europe, dividing up a restaurant check, reliving an embarrassing encounter and deciding what you should have said.

If at this point you're still not quite in the swing of Mr. Dennett's theory, you can be sure he will keep retreating and attacking and retreating and attacking, circling in on his prey.

At first I was a little disappointed when I realized that what I was reading was not so much a brand-new theory of consciousness as a synthesis and sharpening of ideas that have been around awhile -- Mr. Minsky and Mr. Papert's Society of Mind model, Julian Jaynes's theory of inner voices described in "The Origin of Consciousness in the Breakdown of the Bicameral Mind." But in illuminating these ideas and relentlessly putting them to the test, Mr. Dennett's exposition is nothing short of brilliant, the best example I've seen of a science book aimed at both professionals and general readers.

Scientists who look down on colleagues who write popular accounts can rest assured that Mr. Dennett is never less than methodical, thorough, fair in his attributions -- you can almost feel those other philosophers and scientists reading over his shoulder as he types. It's a wonder, then, that he managed to write a book that is also so clear and funny, with introspective flights of fancy worthy of Nicholson Baker. How do you know, Mr. Dennett muses, that everyone in the world but you isn't a zombie? Or that you are not just a brain in a vat, hooked up to a simulation you think is life? It has been a long time since I have felt so engaged by a book.

For all its clarity and style, "Consciousness Explained" is not easy reading. Mr. Dennett probably should have put his methodology (something called heterophenomenology) in one of his appendices (he has one for scientists and one for philosophers). And parts of his argument will be difficult for people who haven't read some of the popular accounts of artificial intelligence and cognitive science. But this book is so good that it's worth studying up for.

In his best seller, "The Emperor's New Mind," the Oxford mathematician Roger Penrose dismissed in a few pages the possibility that consciousness can be explained by thinking of the brain as a kind of computer. If there is any justice, everyone who bought a copy of Mr. Penrose's far more difficult book will buy a copy of Mr. Dennett's and marvel at how, in the hands of a master explicator, the richness and power of the computer metaphor of the mind comes shining through.