Sorry, But Your Soul Just Died

From The Guardian, Feb. 21, 2016:

Did Tom Wolfe’s bold predictions about human nature come true?

Twenty years ago, Tom Wolfe made predictions about how advances in neuroscience would transform our understanding of human behaviour. So, how much did he get right?

Exactly 20 years ago, Tom Wolfe wrote one of the most influential articles in neuroscience. Titled Sorry, But Your Soul Just Died, the 1996 article explores how ideas from brain science were beginning to transform our understanding of human nature and extend the horizons of our scientific imagination. It was published in a mainstream magazine, written by an outsider, and seemed to throw open the doors to an exhilarating revolution in science and self-understanding. Looking at the state of neuroscience and society two decades later, Wolfe turned out to be an insightful but uneven prophet of the brain’s future.

In some ways, the article was a surprising turn for the American writer. From his early work, the best known being The Electric Kool-Aid Acid Test, he has been a keen observer of human behaviour and how people jockey for status among their peers, but he tended to focus on unusual or exceptional subcultures. Hippies, upper-class radicals, car freaks and astronauts all featured in his searching and, occasionally, eviscerating reporting. An interest in neuroscientists – brain geeks – must have seemed like an enthusiasm for paint salesmen to much of the mid-90s public but Wolfe saw a genuine cultural subversion emerging from the field.

Not all of his predictions hit the mark; some now seem quaint or even ridiculous. He describes Richard Dawkins as “earnestly, feverishly, politically correct”, allowing us a nostalgic look back to a gentler time, before Twitter revealed that Dawkins’s inner monologue is like listening to Donald Trump on a day out to the mosque. More scientifically, Wolfe’s assertion that brain scanning would have a greater impact on everyday life than the internet is one he has had to recant in many subsequent interviews. Other predictions seem to have been swayed by his conservative politics. In one particularly odd section he talks about an “IQ cap”, which could apparently test your intelligence just by measuring brain waves. Wolfe claims the technology was developed but suppressed because “nobody wanted to believe that human brainpower is … hardwired”. In reality, the technology just didn’t work. Measuring complex human abilities from simple features of brain function has long since been abandoned as a non-starter.

But Wolfe’s political biases may have served him well when considering one of the most contentious debates of the day: the role of biology in understanding violence. He mentions the Violence Initiative, a US government project to study the genetics of violent behaviour in inner cities. Already controversial, it was abandoned after the lead researcher gave a jaw-dropping speech that referred to the evolutionary basis of violence in monkeys and compared inner cities to a “jungle”. Wolfe rightly described this as “the stupidest single word uttered by an American public official in 1992”. The racist overtones compounded legitimate concerns about the project, but the episode strengthened a long-held liberal suspicion that any research on the biology of violence was eugenics in disguise, something Wolfe thought absurd. Twenty years later, he has largely been proved right: the neuroscience of violence is now relatively uncontroversial, a matter of debate, not protest. The pendulum swung in the other direction for a while, with overblown claims about a “warrior gene”. Now, biological factors are accepted as present but so complex that attempts to make political capital out of them quickly stumble. These days, using simple reductionist biology as either an axe or a foil marks you out as scientifically naive.

A philosopher writes me:

I had a few thoughts about the Tom Wolfe essay on neuroscience. At one point, Wolfe says that scientists now regard consciousness as “an illusion”. In other words, they think there is no such thing as consciousness, although it seems to us that there is consciousness. (Isn’t that what it would mean to call it “an illusion”?) This is one of those cases where philosophy turns out to be, surprisingly, important and useful. The claim that consciousness is an illusion is simply incoherent, self-defeating or self-contradictory. In order for an illusion (of any kind) to exist there must at least be consciousness. An illusion involves an inconsistency between how things seem (to someone) and how things really are. So, for example, it could be an illusion that the physical world exists. Maybe all that exists in reality are minds and their experiences, which make it seem to us, incorrectly, that there are physical things in addition to minds. But it couldn’t be an illusion that consciousness exists, because any illusion necessarily involves consciousness. There can’t be any way that things seem (to me or you or anyone) unless there are conscious states or experiences. The way things seem to me just is, by its very nature, a fact about my consciousness. If there is no consciousness at all, if consciousness really is an illusion, then there is no difference between how things seem to me and how things really are. But then there is no illusion either.
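
To make the structure of that argument explicit, here is a compressed formalization; the premise labels and predicate names are my own, purely for illustration (I for “consciousness is an illusion”, C for “consciousness exists”, S(x) for “x is a way things seem to someone”):

```latex
\begin{align*}
(1)\;& I \rightarrow \exists x\, S(x)   && \text{an illusion involves a way things seem} \\
(2)\;& \forall x\,(S(x) \rightarrow C)  && \text{any seeming is itself a conscious state} \\
(3)\;& I \rightarrow \neg C             && \text{what the illusion claim asserts} \\
(4)\;& I \rightarrow C                  && \text{from (1) and (2)} \\
(5)\;& \neg I                           && \text{from (3) and (4), by reductio}
\end{align*}
```

Premise (3) is just what “consciousness is an illusion” says; (1) and (2) are what the letter argues any illusion requires; the clash between (3) and (4) is the incoherence the letter points to.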

As often happens, modern people seem so fascinated by scientific theorizing that they forget to apply basic critical thinking. We are good at measuring and predicting and assembling these vast reams of data, but we’re primitives when it comes to the very different task of interpreting or conceptualizing or reflecting on these facts and predictions.

A bit later, Wolfe (or someone) claims that science has revealed consciousness to be merely “a physical product” of something in the brain. But this is inconsistent with the earlier claim that it’s an “illusion”. If it’s an illusion, it’s not real; if it’s a physical thing of some kind, I guess it must be real. But never mind. Suppose that consciousness is a “physical product” of brain activity. I think we don’t even understand how that could be true. It’s much like saying that the number 3 is made of cheese, or that the Sears Tower is in the key of F Minor. Some kind of category mistake. Consciousness has certain properties that don’t seem like they could be properties of any physical thing. Consider truth, for example. When I have a thought, it may be true or it may be false. Does it really make sense to anyone to say that the lump of meat in your skull is true, or that some electro-chemical process is true? Likewise conscious states of mind can be ‘about’ things. They are ‘intentional’, philosophers say. Right now I’m thinking of Tom Wolfe. How could it make sense to say that a piece of meat or some other “physical product” of my brain’s activity is about Tom Wolfe? At least, this is a very mysterious idea with no real explanation in science. Similarly, conscious states have what philosophers call “phenomenal character”. Roughly, there’s something it’s like to be conscious. When I see a red tomato, there is the experiential redness, the way it looks. No one has ever been able to explain how that can be identified with something physical. For instance, doesn’t it seem you could know everything about light waves and other purely physical aspects of color without knowing what red looks like to me? For all these reasons, and others, it’s hard to understand the idea that consciousness is a physical thing. So the old Cartesian idea of mind-body dualism is not a “quaint” theory, as Wolfe puts it. On the contrary, it has always been a serious and rationally defensible position. Nothing that’s been discovered in science makes any difference.

On a somewhat different topic, I’ve been thinking about your idea that religion is based on a subjective leap of faith and therefore it makes no sense to speak of truth (or objective truth) in matters of religion. I don’t agree. For one thing, it is probably true that religious belief depends on some kind of leap of faith, but that doesn’t mean the resulting world-view can’t be objectively true or objectively false. You’re mixing up a question about how we justify our beliefs with a different question about their truth-values. To say that it’s based on a leap of faith just means (I think) that there is no evidence for us that makes any religion definitely more rational to believe than some other religion, or no religion. In other words, the evidence is inconclusive. If you want to believe, you can find some evidence for belief, but it won’t be decisive; in the end you just have to choose. But consider a similar case. I have no evidence that rationally settles the question of whether the number of rocks on the moon right now is even. But suppose I just arbitrarily choose to believe that the number is indeed even. Well, that might happen to be true, or not, depending on how many rocks are on the moon at the moment. So the fact that I made a leap of faith doesn’t preclude the possibility that my belief is objectively true. Another point to consider is that science (and everything else) depends on some similar leap of faith, though it’s not usually identified as such. In his essay “The Will To Believe”, William James claims that science depends on the unverifiable assumptions that (a) objective truth exists and (b) the human mind is able to know at least some objective truths. Science wouldn’t be a reliable method unless both assumptions were true, but neither one can be scientifically verified; to try to use science to justify either assumption would be going in a circle, since we’d be relying on those very assumptions in trying to verify them. I’d say that all human thinking ultimately depends on beliefs that we accept because they just seem so obvious and compelling to us. Religion is no different from science or philosophy or any other kind of human thinking in that respect. In philosophy or political inquiry we rely on logic. What is the evidence for the basic rules of logic? Why do we think it’s safe to assume that a contradiction can’t be true, that it can’t be true that Trump is President and is not President at the same time? Well, it just seems utterly obvious, and we find it hard to imagine how reality could be anti-logical. But there’s no way to verify the rules of logic. To verify anything we’d need to rely on logic, so the reasoning would be circular.

Hi Luke,
I don’t understand your view of motivation. I would think my motivation for doing something is simply my reason or some set of reasons for whatever I’m doing. My reasons for going to work are things like earning money, exercising some abilities I’ve learned, seeing friends, etc. It seems obvious to me that I often know at least some of my reasons for acting, i.e., my motivations. I allow that I can’t know all of them all the time, and that some might be subconscious. But why would you say categorically that we can’t know our motivations?

I also don’t understand how you could explain behavior in terms of incentives without assuming that you know at least some motivations of others. If people tend to obey the law because they are responding to incentives (and disincentives), that is surely because they have certain motivations.
For example, people want to stay out of prison, and that motivates them to obey the law; the want or motivation is what explains the fact that law-abiding behavior is incentivized and law-breaking behavior is disincentivized for them. Men watch porn because they have sexual or psychological motivations. And we can often learn a fair bit about what those motivations are. We can listen to what they say, or introspect honestly. No? What am I missing here?

From neuroscience to Nietzsche. A sobering look at how man may perceive himself in the future, particularly as ideas about genetic predeterminism take the place of dying Darwinism. This article was first published in “Forbes ASAP” in 1996.

By Tom Wolfe

Being a bit behind the curve, I had only just heard of the digital revolution last February when Louis Rossetto, cofounder of Wired magazine, wearing a shirt with no collar and his hair as long as Felix Mendelssohn’s, looking every inch the young California visionary, gave a speech before the Cato Institute announcing the dawn of the twenty–first century’s digital civilization. As his text, he chose the maverick Jesuit scientist and philosopher Pierre Teilhard de Chardin, who fifty years ago prophesied that radio, television, and computers would create a “noosphere,” an electronic membrane covering the earth and wiring all humanity together in a single nervous system. Geographic locations, national boundaries, the old notions of markets and political processes—all would become irrelevant. With the Internet spreading over the globe at an astonishing pace, said Rossetto, that marvelous modem–driven moment is almost at hand.

Could be. But something tells me that within ten years, by 2006, the entire digital universe is going to seem like pretty mundane stuff compared to a new technology that right now is but a mere glow radiating from a tiny number of American and Cuban (yes, Cuban) hospitals and laboratories. It is called brain imaging, and anyone who cares to get up early and catch a truly blinding twenty–first–century dawn will want to keep an eye on it.

Brain imaging refers to techniques for watching the human brain as it functions, in real time. The most advanced forms currently are three–dimensional electroencephalography using mathematical models; the more familiar PET scan (positron–emission tomography); the new fMRI (functional magnetic resonance imaging), which shows brain blood–flow patterns, and MRS (magnetic resonance spectroscopy), which measures biochemical changes in the brain; and the even newer PET reporter gene/PET reporter probe, which is, in fact, so new that it still has that length of heavy lumber for a name. Used so far only in animals and a few desperately sick children, the PET reporter gene/PET reporter probe pinpoints and follows the activity of specific genes. On a scanner screen you can actually see the genes light up inside the brain.

By 1996 standards, these are sophisticated devices. Ten years from now, however, they may seem primitive compared to the stunning new windows into the brain that will have been developed.

Brain imaging was invented for medical diagnosis. But its far greater importance is that it may very well confirm, in ways too precise to be disputed, certain theories about “the mind,” “the self,” “the soul,” and “free will” that are already devoutly believed in by scholars in what is now the hottest field in the academic world, neuroscience. Granted, all those skeptical quotation marks are enough to put anybody on the qui vive right away, but Ultimate Skepticism is part of the brilliance of the dawn I have promised.

Neuroscience, the science of the brain and the central nervous system, is on the threshold of a unified theory that will have an impact as powerful as that of Darwinism a hundred years ago. Already there is a new Darwin, or perhaps I should say an updated Darwin, since no one ever believed more religiously in Darwin I than he does. His name is Edward O. Wilson. He teaches zoology at Harvard, and he is the author of two books of extraordinary influence, The Insect Societies and Sociobiology: The New Synthesis. Not “A” new synthesis but “The” new synthesis; in terms of his stature in neuroscience, it is not a mere boast.

Wilson has created and named the new field of sociobiology, and he has compressed its underlying premise into a single sentence. Every human brain, he says, is born not as a blank tablet (a tabula rasa) waiting to be filled in by experience but as “an exposed negative waiting to be slipped into developer fluid.” You can develop the negative well or you can develop it poorly, but either way you are going to get precious little that is not already imprinted on the film. The print is the individual’s genetic history, over thousands of years of evolution, and there is not much anybody can do about it. Furthermore, says Wilson, genetics determine not only things such as temperament, role preferences, emotional responses, and levels of aggression, but also many of our most revered moral choices, which are not choices at all in any free–will sense but tendencies imprinted in the hypothalamus and limbic regions of the brain, a concept expanded upon in 1993 in a much–talked–about book, The Moral Sense, by James Q. Wilson (no kin to Edward O.).

The neuroscientific view of life
This, the neuroscientific view of life, has become the strategic high ground in the academic world, and the battle for it has already spread well beyond the scientific disciplines and, for that matter, out into the general public. Both liberals and conservatives without a scientific bone in their bodies are busy trying to seize the terrain. The gay rights movement, for example, has fastened onto a study published in July of 1993 by the highly respected Dean Hamer of the National Institutes of Health, announcing the discovery of “the gay gene.” Obviously, if homosexuality is a genetically determined trait, like left–handedness or hazel eyes, then laws and sanctions against it are attempts to legislate against Nature. Conservatives, meantime, have fastened upon studies indicating that men’s and women’s brains are wired so differently, thanks to the long haul of evolution, that feminist attempts to open up traditionally male roles to women are the same thing: a doomed violation of Nature.

Wilson himself has wound up in deep water on this score; or cold water, if one need edit. In his personal life Wilson is a conventional liberal, PC, as the saying goes—he is, after all, a member of the Harvard faculty—concerned about environmental issues and all the usual things. But he has said that “forcing similar role identities” on both men and women “flies in the face of thousands of years in which mammals demonstrated a strong tendency for sexual division of labor. Since this division of labor is persistent from hunter–gatherer through agricultural and industrial societies, it suggests a genetic origin. We do not know when this trait evolved in human evolution or how resistant it is to the continuing and justified pressures for human rights.”

“Resistant” was Darwin II, the neuroscientist, speaking. “Justified” was the PC Harvard liberal. He was not PC or liberal enough. Feminist protesters invaded a conference where Wilson was appearing, dumped a pitcher of ice water, cubes and all, over his head, and began chanting, “You’re all wet! You’re all wet!” The most prominent feminist in America, Gloria Steinem, went on television and, in an interview with John Stossel of ABC, insisted that studies of genetic differences between male and female nervous systems should cease forthwith.

But that turned out to be mild stuff in the current political panic over neuroscience. In February of 1992, Frederick K. Goodwin, a renowned psychiatrist, head of the federal Alcohol, Drug Abuse, and Mental Health Administration, and a certified yokel in the field of public relations, made the mistake of describing, at a public meeting in Washington, the National Institute of Mental Health’s ten–year–old Violence Initiative. This was an experimental program whose hypothesis was that, as among monkeys in the jungle—Goodwin was noted for his monkey studies—much of the criminal mayhem in the United States was caused by a relatively few young males who were genetically predisposed to it; who were hardwired for violent crime, in short. Out in the jungle, among mankind’s closest animal relatives, the chimpanzees, it seemed that a handful of genetically twisted young males were the ones who committed practically all of the wanton murders of other males and the physical abuse of females. What if the same were true among human beings? What if, in any given community, it turned out to be a handful of young males with toxic DNA who were pushing statistics for violent crime up to such high levels? The Violence Initiative envisioned identifying these individuals in childhood, somehow, some way, someday, and treating them therapeutically with drugs. The notion that crime–ridden urban America was a “jungle,” said Goodwin, was perhaps more than just a tired old metaphor.

That did it. That may have been the stupidest single word uttered by an American public official in the year 1992. The outcry was immediate. Senator Edward Kennedy of Massachusetts and Representative John Dingell of Michigan (who, it became obvious later, suffered from hydrophobia when it came to science projects) not only condemned Goodwin’s remarks as racist but also delivered their scientific verdict: Research among primates “is a preposterous basis” for analyzing anything as complex as “the crime and violence that plagues our country today.” (This came as surprising news to NASA scientists who had first trained and sent a chimpanzee called Ham up on top of a Redstone rocket into suborbital space flight and then trained and sent another one, called Enos, which is Greek for “man,” up on an Atlas rocket and around the earth in orbital space flight and had thereby accurately and completely predicted the physical, psychological, and task–motor responses of the human astronauts, Alan Shepard and John Glenn, who repeated the chimpanzees’ flights and tasks months later.) The Violence Initiative was compared to Nazi eugenic proposals for the extermination of undesirables. Dingell’s Michigan colleague, Representative John Conyers, then chairman of the Government Operations Committee and senior member of the Congressional Black Caucus, demanded Goodwin’s resignation—and got it two days later, whereupon the government, with the Department of Health and Human Services now doing the talking, denied that the Violence Initiative had ever existed. It disappeared down the memory hole, to use Orwell’s term.

A conference of criminologists and other academics interested in the neuroscientific studies done so far for the Violence Initiative—a conference underwritten in part by a grant from the National Institutes of Health—had been scheduled for May of 1993 at the University of Maryland. Down went the conference, too; the NIH drowned it like a kitten. Last year, a University of Maryland legal scholar named David Wasserman tried to reassemble the troops on the QT, as it were, in a hall all but hidden from human purview in a hamlet called Queenstown in the foggy, boggy boondocks of Queen Anne’s County on Maryland’s Eastern Shore. The NIH, proving it was a hard learner, quietly provided $133,000 for the event but only after Wasserman promised to fireproof the proceedings by also inviting scholars who rejected the notion of a possible genetic genesis of crime and scheduling a cold–shower session dwelling on the evils of the eugenics movement of the early twentieth century. No use, boys! An army of protesters found the poor cringing devils anyway and stormed into the auditorium chanting, “Maryland conference, you can’t hide—we know you’re pushing genocide!” It took two hours for them to get bored enough to leave, and the conference ended in a complete muddle with the specially recruited fireproofing PC faction issuing a statement that said: “Scientists as well as historians and sociologists must not allow themselves to provide academic respectability for racist pseudoscience.” Today, at the NIH, the term Violence Initiative is a synonym for taboo. The present moment resembles that moment in the Middle Ages when the Catholic Church forbade the dissection of human bodies, for fear that what was discovered inside might cast doubt on the Christian doctrine that God created man in his own image.

Even more radioactive is the matter of intelligence, as measured by IQ tests. Privately—not many care to speak out—the vast majority of neuroscientists believe the genetic component of an individual’s intelligence is remarkably high. Your intelligence can be improved upon, by skilled and devoted mentors, or it can be held back by a poor upbringing—i.e., the negative can be well developed or poorly developed—but your genes are what really make the difference. The recent ruckus over Charles Murray and Richard Herrnstein’s The Bell Curve is probably just the beginning of the bitterness the subject is going to create.

Not long ago, according to two neuroscientists I interviewed, a firm called Neurometrics sought out investors and tried to market an amazing but simple invention known as the IQ Cap. The idea was to provide a way of testing intelligence that would be free of “cultural bias,” one that would not force anyone to deal with words or concepts that might be familiar to people from one culture but not to people from another. The IQ Cap recorded only brain waves; and a computer, not a potentially biased human test–giver, analyzed the results. It was based on the work of neuroscientists such as E. Roy John, who is now one of the major pioneers of electroencephalographic brain imaging; Duilio Giannitrapani, author of The Electrophysiology of Intellectual Functions; and David Robinson, author of The Wechsler Adult Intelligence Scale and Personality Assessment: Toward a Biologically Based Theory of Intelligence and Cognition and many other monographs famous among neuroscientists. I spoke to one researcher who had devised an IQ Cap himself by replicating an experiment described by Giannitrapani in The Electrophysiology of Intellectual Functions. It was not a complicated process. You attached sixteen electrodes to the scalp of the person you wanted to test. You had to muss up his hair a little, but you didn’t have to cut it, much less shave it. Then you had him stare at a marker on a blank wall. This particular researcher used a raspberry–red thumbtack. Then you pushed a toggle switch. In sixteen seconds the Cap’s computer box gave you an accurate prediction (within one–half of a standard deviation) of what the subject would score on all eleven subtests of the Wechsler Adult Intelligence Scale or, in the case of children, the Wechsler Intelligence Scale for Children—all from sixteen seconds’ worth of brain waves. There was nothing culturally biased about the test whatsoever. What could be cultural about staring at a thumbtack on a wall? The savings in time and money were breathtaking. The conventional IQ test took two hours to complete; and the overhead, in terms of paying test–givers, test–scorers, test–preparers, and the rent, was $100 an hour at the very least. The IQ Cap required about fifteen minutes and sixteen seconds—it took about fifteen minutes to put the electrodes on the scalp—and about a tenth of a penny’s worth of electricity. Neurometrics’s investors were rubbing their hands and licking their chops. They were about to make a killing.
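
To make the claimed pipeline concrete, here is a toy sketch of a brain-wave-to-IQ regression of the general kind the IQ Cap was said to perform. Everything in it, from the sampling rate to the calibration data, is invented for illustration; it is not Neurometrics’s actual method, and, as the Guardian piece above notes, deriving complex abilities from simple features of brain function never panned out.

```python
# Toy sketch only: a least-squares map from EEG band-power features to the
# eleven WAIS subtest scores -- the general shape of what the IQ Cap claimed
# to do. All constants and data below are invented for illustration.
import numpy as np

N_ELECTRODES = 16   # sixteen electrodes on the scalp, per the essay
SAMPLE_RATE = 256   # Hz; an assumed sampling rate
DURATION_S = 16     # sixteen seconds of brain waves, per the essay
N_SUBTESTS = 11     # WAIS subtests to predict

def band_power_features(eeg: np.ndarray) -> np.ndarray:
    """Mean spectral power per electrode in the classic delta/theta/alpha/beta
    bands; eeg has shape (electrodes, samples). Returns 16 x 4 = 64 features."""
    freqs = np.fft.rfftfreq(eeg.shape[1], d=1.0 / SAMPLE_RATE)
    power = np.abs(np.fft.rfft(eeg, axis=1)) ** 2
    bands = [(1, 4), (4, 8), (8, 13), (13, 30)]  # Hz
    return np.concatenate(
        [power[:, (freqs >= lo) & (freqs < hi)].mean(axis=1) for lo, hi in bands]
    )

rng = np.random.default_rng(0)

def record() -> np.ndarray:
    """Stand-in for sixteen seconds of EEG; here just noise."""
    return rng.standard_normal((N_ELECTRODES, SAMPLE_RATE * DURATION_S))

# "Calibration": fit weights from features to subtest scores on fake data.
X = np.array([band_power_features(record()) for _ in range(200)])
y = rng.normal(10, 3, size=(200, N_SUBTESTS))   # fake WAIS subtest scores
W, *_ = np.linalg.lstsq(X, y, rcond=None)

# "Testing a subject": sixteen seconds of brain waves in, eleven scores out.
predicted = band_power_features(record()) @ W
print(predicted.round(1))
```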

In fact—nobody wanted their damnable IQ Cap!

It wasn’t simply that no one believed you could derive IQ scores from brainwaves—it was that nobody wanted to believe it could be done. Nobody wanted to believe that human brainpower is…that hardwired. Nobody wanted to learn in a flash that…the genetic fix is in. Nobody wanted to learn that he was…a hardwired genetic mediocrity…and that the best he could hope for in this Trough of Mortal Error was to live out his mediocre life as a stress–free dim bulb. Barry Sterman of UCLA, chief scientist for a firm called Cognitive Neurometrics, who has devised his own brain–wave technology for market research and focus groups, regards brain–wave IQ testing as possible—but in the current atmosphere you “wouldn’t have a Chinaman’s chance of getting a grant” to develop it.

Science is a court from which there is no appeal
Here we begin to sense the chill that emanates from the hottest field in the academic world. The unspoken and largely unconscious premise of the wrangling over neuroscience’s strategic high ground is: We now live in an age in which science is a court from which there is no appeal. And the issue this time around, at the end of the twentieth century, is not the evolution of the species, which can seem a remote business, but the nature of our own precious inner selves.

The elders of the field, such as Wilson, are well aware of all this and are cautious, or cautious compared to the new generation. Wilson still holds out the possibility—I think he doubts it, but he still holds out the possibility—that at some point in evolutionary history, culture began to influence the development of the human brain in ways that cannot be explained by strict Darwinian theory. But the new generation of neuroscientists are not cautious for a second. In private conversations, the bull sessions, as it were, that create the mental atmosphere of any hot new science—and I love talking to these people—they express an uncompromising determinism.

They start with the most famous statement in all of modern philosophy, Descartes’s “Cogito ergo sum,” “I think, therefore I am,” which they regard as the essence of “dualism,” the old–fashioned notion that the mind is something distinct from its mechanism, the brain and the body. (I will get to the second most famous statement in a moment.) This is also known as the “ghost in the machine” fallacy, the quaint belief that there is a ghostly “self” somewhere inside the brain that interprets and directs its operations. Neuroscientists involved in three–dimensional electroencephalography will tell you that there is not even any one place in the brain where consciousness or self–consciousness (Cogito ergo sum) is located. This is merely an illusion created by a medley of neurological systems acting in concert. The young generation takes this yet one step further. Since consciousness and thought are entirely physical products of your brain and nervous system—and since your brain arrived fully imprinted at birth—what makes you think you have free will? Where is it going to come from? What “ghost,” what “mind,” what “self,” what “soul,” what anything that will not be immediately grabbed by those scornful quotation marks, is going to bubble up your brain stem to give it to you? I have heard neuroscientists theorize that, given computers of sufficient power and sophistication, it would be possible to predict the course of any human being’s life moment by moment, including the fact that the poor devil was about to shake his head over the very idea. I doubt that any Calvinist of the sixteenth century ever believed so completely in predestination as these, the hottest and most intensely rational young scientists in the United States at the end of the twentieth.

Since the late 1970s, in the Age of Wilson, college students have been heading into neuroscience in job lots. The Society for Neuroscience was founded in 1970 with 1,100 members. Today, one generation later, its membership exceeds 26,000. The Society’s latest convention, in San Diego, drew 23,052 souls, making it one of the biggest professional conventions in the country. In the venerable field of academic philosophy, young faculty members are jumping ship in embarrassing numbers and shifting into neuroscience. They are heading for the laboratories. Why wrestle with Kant’s God, Freedom, and Immortality when it is only a matter of time before neuroscience, probably through brain imaging, reveals the actual physical mechanism that sends these mental constructs, these illusions, synapsing up into the Broca’s and Wernicke’s areas of the brain?

Which brings us to the second most famous statement in all of modern philosophy: Nietzsche’s “God is dead.” The year was 1882. (The book was Die Fröhliche Wissenschaft [The Gay Science].) Nietzsche said this was not a declaration of atheism, although he was in fact an atheist, but simply the news of an event. He called the death of God a “tremendous event,” the greatest event of modern history. The news was that educated people no longer believed in God, as a result of the rise of rationalism and scientific thought, including Darwinism, over the preceding 250 years. But before you atheists run up your flags of triumph, he said, think of the implications. “The story I have to tell,” wrote Nietzsche, “is the history of the next two centuries.” He predicted (in Ecce Homo) that the twentieth century would be a century of “wars such as have never happened on earth,” wars catastrophic beyond all imagining. And why? Because human beings would no longer have a god to turn to, to absolve them of their guilt; but they would still be racked by guilt, since guilt is an impulse instilled in children when they are very young, before the age of reason. As a result, people would loathe not only one another but themselves. The blind and reassuring faith they formerly poured into their belief in God, said Nietzsche, they would now pour into a belief in barbaric nationalistic brotherhoods: “If the doctrines…of the lack of any cardinal distinction between man and animal, doctrines I consider true but deadly”—he says in an allusion to Darwinism in Untimely Meditations—“are hurled into the people for another generation…then nobody should be surprised when…brotherhoods with the aim of the robbery and exploitation of the non–brothers…will appear in the arena of the future.”

Nietzsche’s view of guilt, incidentally, is also that of neuroscientists a century later. They regard guilt as one of those tendencies imprinted in the brain at birth. In some people the genetic work is not complete, and they engage in criminal behavior without a twinge of remorse—thereby intriguing criminologists, who then want to create Violence Initiatives and hold conferences on the subject.

Nietzsche said that mankind would limp on through the twentieth century “on the mere pittance” of the old decaying God–based moral codes. But then, in the twenty–first, would come a period more dreadful than the great wars, a time of “the total eclipse of all values” (in The Will to Power). This would also be a frantic period of “revaluation,” in which people would try to find new systems of values to replace the osteoporotic skeletons of the old. But you will fail, he warned, because you cannot believe in moral codes without simultaneously believing in a god who points at you with his fearsome forefinger and says “Thou shalt” or “Thou shalt not.”

Why should we bother ourselves with a dire prediction that seems so far–fetched as “the total eclipse of all values”? Because of man’s track record, I should think. After all, in Europe, in the peaceful decade of the 1880s, it must have seemed even more far–fetched to predict the world wars of the twentieth century and the barbaric brotherhoods of Nazism and Communism. Ecce vates! Ecce vates! Behold the prophet! How much more proof can one demand of a man’s powers of prediction?

A hundred years ago those who worried about the death of God could console one another with the fact that they still had their own bright selves and their own inviolable souls for moral ballast and the marvels of modern science to chart the way. But what if, as seems likely, the greatest marvel of modern science turns out to be brain imaging? And what if, ten years from now, brain imaging has proved, beyond any doubt, that not only Edward O. Wilson but also the young generation are, in fact, correct?

The elders, such as Wilson himself and Daniel C. Dennett, the author of Darwin’s Dangerous Idea: Evolution and the Meanings of Life, and Richard Dawkins, author of The Selfish Gene and The Blind Watchmaker, insist that there is nothing to fear from the truth, from the ultimate extension of Darwin’s dangerous idea. They present elegant arguments as to why neuroscience should in no way diminish the richness of life, the magic of art, or the righteousness of political causes, including, if one need edit, political correctness at Harvard or Tufts, where Dennett is Director of the Center for Cognitive Studies, or Oxford, where Dawkins is something called Professor of Public Understanding of Science. (Dennett and Dawkins, every bit as much as Wilson, are earnestly, feverishly, politically correct.) Despite their best efforts, however, neuroscience is not rippling out into the public on waves of scholarly reassurance. But rippling out it is, rapidly. The conclusion people out beyond the laboratory walls are drawing is: The fix is in! We’re all hardwired! That, and: Don’t blame me! I’m wired wrong!

From nurture to nature
This sudden switch from a belief in Nurture, in the form of social conditioning, to Nature, in the form of genetics and brain physiology, is the great intellectual event, to borrow Nietzsche’s term, of the late twentieth century. Up to now the two most influential ideas of the century have been Marxism and Freudianism. Both were founded upon the premise that human beings and their “ideals”—Marx and Freud knew about quotation marks, too—are completely molded by their environment. To Marx, the crucial environment was one’s social class; “ideals” and “faiths” were notions foisted by the upper orders upon the lower as instruments of social control. To Freud, the crucial environment was the Oedipal drama, the unconscious sexual plot that was played out in the family early in a child’s existence. The “ideals” and “faiths” you prize so much are merely the parlor furniture you feature for receiving your guests, said Freud; I will show you the cellar, the furnace, the pipes, the sexual steam that actually runs the house. By the mid–1950s even anti–Marxists and anti–Freudians had come to assume the centrality of class domination and Oedipally conditioned sexual drives. On top of this came Pavlov, with his “stimulus–response bonds,” and B. F. Skinner, with his “operant conditioning,” turning the supremacy of conditioning into something approaching a precise form of engineering.

So how did this brilliant intellectual fashion come to so screeching and ignominious an end?

The demise of Freudianism can be summed up in a single word: lithium. In 1949 an Australian psychiatrist, John Cade, gave five days of lithium therapy—for entirely the wrong reasons—to a fifty–one–year–old mental patient who was so manic–depressive, so hyperactive, unintelligible, and uncontrollable, he had been kept locked up in asylums for twenty years. By the sixth day, thanks to the lithium buildup in his blood, he was a normal human being. Three months later he was released and lived happily ever after in his own home. This was a man who had been locked up and subjected to two decades of Freudian logorrhea to no avail whatsoever. Over the next twenty years antidepressant and tranquilizing drugs completely replaced Freudian talk–talk as treatment for serious mental disturbances. By the mid–1980s, neuroscientists looked upon Freudian psychiatry as a quaint relic based largely upon superstition (such as dream analysis — dream analysis!), like phrenology or mesmerism. In fact, among neuroscientists, phrenology now has a higher reputation than Freudian psychiatry, since phrenology was in a certain crude way a precursor of electroencephalography. Freudian psychiatrists are now regarded as old crocks with sham medical degrees, as ears with wire hairs sprouting out of them that people with more money than sense can hire to talk into.

Marxism was finished off even more suddenly—in a single year, 1973—with the smuggling out of the Soviet Union and the publication in France of the first of the three volumes of Aleksandr Solzhenitsyn’s The Gulag Archipelago. Other writers, notably the British historian Robert Conquest, had already exposed the Soviet Union’s vast network of concentration camps, but their work was based largely on the testimony of refugees, and refugees were routinely discounted as biased and bitter observers. Solzhenitsyn, on the other hand, was a Soviet citizen, still living on Soviet soil, a zek himself for eleven years, zek being Russian slang for concentration camp prisoner. His credibility had been vouched for by none other than Nikita Khrushchev, who in 1962 had permitted the publication of Solzhenitsyn’s novella of the gulag, One Day in the Life of Ivan Denisovich, as a means of cutting down to size the daunting shadow of his predecessor Stalin. “Yes,” Khrushchev had said in effect, “what this man Solzhenitsyn has to say is true. Such were Stalin’s crimes.” Solzhenitsyn’s brief fictional description of the Soviet slave labor system was damaging enough. But The Gulag Archipelago, a two–thousand–page, densely detailed, nonfiction account of the Soviet Communist Party’s systematic extermination of its enemies, real and imagined, of its own countrymen, by the tens of millions through an enormous, methodical, bureaucratically controlled “human sewage disposal system,” as Solzhenitsyn called it— The Gulag Archipelago was devastating. After all, this was a century in which there was no longer any possible ideological detour around the concentration camp. Among European intellectuals, even French intellectuals, Marxism collapsed as a spiritual force immediately. Ironically, it survived longer in the United States before suffering a final, merciful coup de grace on November 9, 1989, with the breaching of the Berlin Wall, which signaled in an unmistakable fashion what a debacle the Soviets’ seventy–two–year field experiment in socialism had been. (Marxism still hangs on, barely, acrobatically, in American universities in a Mannerist form known as Deconstruction, a literary doctrine that depicts language itself as an insidious tool used by The Powers That Be to deceive the proles and peasants.)

Freudianism and Marxism—and with them, the entire belief in social conditioning—were demolished so swiftly, so suddenly, that neuroscience has surged in, as if into an intellectual vacuum. Nor do you have to be a scientist to detect the rush.

Anyone with a child in school knows the signs all too well. I have children in school, and I am intrigued by the faith parents now invest—the craze began about 1990—in psychologists who diagnose their children as suffering from a defect known as attention deficit disorder, or ADD. Of course, I have no way of knowing whether this “disorder” is an actual, physical, neurological condition or not, but neither does anybody else in this early stage of neuroscience. The symptoms of this supposed malady are always the same. The child, or, rather, the boy—forty–nine out of fifty cases are boys—fidgets around in school, slides off his chair, doesn’t pay attention, distracts his classmates during class, and performs poorly. In an earlier era he would have been pressured to pay attention, work harder, show some self–discipline. To parents caught up in the new intellectual climate of the 1990s, that approach seems cruel, because my little boy’s problem is…he’s wired wrong! The poor little tyke —the fix has been in since birth! Invariably the parents complain, “All he wants to do is sit in front of the television set and watch cartoons and play Sega Genesis.” For how long? “How long? For hours at a time.” Hours at a time; as even any young neuroscientist will tell you, that boy may have a problem, but it is not an attention deficit.

Nevertheless, all across America we have the spectacle of an entire generation of little boys, by the tens of thousands, being dosed up on ADD’s magic bullet of choice, Ritalin, the CIBA–Geneva Corporation’s brand name for the stimulant methylphenidate. I first encountered Ritalin in 1966 when I was in San Francisco doing research for a book on the psychedelic or hippie movement. A certain species of the genus hippie was known as the Speed Freak, and a certain strain of Speed Freak was known as the Ritalin Head. The Ritalin Heads loved Ritalin. You’d see them in the throes of absolute Ritalin raptures…Not a wiggle, not a peep…They would sit engrossed in anything at all…a manhole cover, their own palm wrinkles…indefinitely…through shoulda–been mealtime after mealtime…through raging insomnias…Pure methyl–phenidate nirvana…From 1990 to 1995, CIBA–Geneva’s sales of Ritalin rose 600 percent; and not because of the appetites of subsets of the species Speed Freak in San Francisco, either. It was because an entire generation of American boys, from the best private schools of the Northeast to the worst sludge–trap public schools of Los Angeles and San Diego, was now strung out on methylphenidate, diligently doled out to them every day by their connection, the school nurse. America is a wonderful country! I mean it! No honest writer would challenge that statement! The human comedy never runs out of material! It never lets you down!

Meantime, the notion of a self—a self who exercises self–discipline, postpones gratification, curbs the sexual appetite, stops short of aggression and criminal behavior—a self who can become more intelligent and lift itself to the very peaks of life by its own bootstraps through study, practice, perseverance, and refusal to give up in the face of great odds—this old–fashioned notion (what’s a boot strap, for God’s sake?) of success through enterprise and true grit is already slipping away, slipping away…slipping away…The peculiarly American faith in the power of the individual to transform himself from a helpless cypher into a giant among men, a faith that ran from Emerson (“Self–Reliance”) to Horatio Alger’s Luck and Pluck stories to Dale Carnegie’s How to Win Friends and Influence People to Norman Vincent Peale’s The Power of Positive Thinking to Og Mandino’s The Greatest Salesman in the World —that faith is now as moribund as the god for whom Nietzsche wrote an obituary in 1882. It lives on today only in the decrepit form of the “motivational talk,” as lecture agents refer to it, given by retired football stars such as Fran Tarkenton to audiences of businessmen, most of them woulda–been athletes (like the author of this article), about how life is like a football game. “It’s late in the fourth period and you’re down by thirteen points and the Cowboys got you hemmed in on your own one–yard line and it’s third and twenty–three. Whaddaya do?…”

Sorry, Fran, but it’s third and twenty–three and the genetic fix is in, and the new message is now being pumped out into the popular press and onto television at a stupefying rate. Who are the pumps? They are a new breed who call themselves “evolutionary psychologists.” You can be sure that twenty years ago the same people would have been calling themselves Freudian; but today they are genetic determinists, and the press has a voracious appetite for whatever they come up with.

The most popular study currently—it is still being featured on television news shows, months later—is David Lykken and Auke Tellegen’s study at the University of Minnesota of two thousand twins that shows, according to these two evolutionary psychologists, that an individual’s happiness is largely genetic. Some people are hardwired to be happy and some are not. Success (or failure) in matters of love, money, reputation, or power is transient stuff; you soon settle back down (or up) to the level of happiness you were born with genetically. Three months ago Fortune devoted a long takeout, elaborately illustrated, to a study by evolutionary psychologists at Britain’s University of Saint Andrews showing that you judge the facial beauty or handsomeness of people you meet not by any social standards of the age you live in but by criteria hardwired in your brain from the moment you were born. Or, to put it another way, beauty is not in the eye of the beholder but embedded in his genes. In fact, today, in the year 1996, barely three years before the end of the millennium, if your appetite for newspapers, magazines, and television is big enough, you will quickly get the impression that there is nothing in your life, including the fat content of your body, that is not genetically predetermined. If I may mention just a few things the evolutionary psychologists have illuminated for me over the past two months:

The male of the human species is genetically hardwired to be polygamous, i.e., unfaithful to his legal mate. Any magazine–reading male gets the picture soon enough. (Three million years of evolution made me do it!) Women lust after male celebrities, because they are genetically hardwired to sense that alpha males will take better care of their offspring. (I’m just a lifeguard in the gene pool, honey.) Teenage girls are genetically hardwired to be promiscuous and are as helpless to stop themselves as dogs in the park. (The school provides the condoms.) Most murders are the result of genetically hardwired compulsions. (Convicts can read, too, and they report to the prison psychiatrist: “Something came over me…and then the knife went in.”)

Where does that leave self–control? Where, indeed, if people believe this ghostly self does not even exist, and brain imaging proves it, once and for all?

So far, neuroscientific theory is based largely on indirect evidence, from studies of animals or of how a normal brain changes when it is invaded (by accidents, disease, radical surgery, or experimental needles). Darwin II himself, Edward O. Wilson, has only limited direct knowledge of the human brain. He is a zoologist, not a neurologist, and his theories are extrapolations from the exhaustive work he has done in his specialty, the study of insects. The French surgeon Paul Broca discovered Broca’s area, one of the two speech centers of the left hemisphere of the brain, only after one of his patients suffered a stroke. Even the PET scan and the PET reporter gene/PET reporter probe are technically medical invasions, since they require the injection of chemicals or viruses into the body. But they offer glimpses of what the noninvasive imaging of the future will probably look like. A neuroradiologist can read a list of topics out loud to a person being given a PET scan, topics pertaining to sports, music, business, history, whatever, and when he finally hits one the person is interested in, a particular area of the cerebral cortex actually lights up on the screen. Eventually, as brain imaging is refined, the picture may become as clear and complete as those see–through exhibitions, at auto shows, of the inner workings of the internal combustion engine. At that point it may become obvious to everyone that all we are looking at is a piece of machinery, an analog chemical computer, that processes information from the environment. “All,” since you can look and look and you will not find any ghostly self inside, or any mind, or any soul.

Thereupon, in the year 2006 or 2026, some new Nietzsche will step forward to announce: “The self is dead”—except that being prone to the poetic, like Nietzsche I, he will probably say: “The soul is dead.” He will say that he is merely bringing the news, the news of the greatest event of the millennium: “The soul, that last refuge of values, is dead, because educated people no longer believe it exists.” Unless the assurances of the Wilsons and the Dennetts and the Dawkinses also start rippling out, the lurid carnival that will ensue may make the phrase “the total eclipse of all values” seem tame.

The two most fascinating riddles of the 21st century
If I were a college student today, I don’t think I could resist going into neuroscience. Here we have the two most fascinating riddles of the twenty–first century: the riddle of the human mind and the riddle of what happens to the human mind when it comes to know itself absolutely. In any case, we live in an age in which it is impossible and pointless to avert your eyes from the truth.

Ironically, said Nietzsche, this unflinching eye for truth, this zest for skepticism, is the legacy of Christianity (for complicated reasons that needn’t detain us here). Then he added one final and perhaps ultimate piece of irony in a fragmentary passage in a notebook shortly before he lost his mind (to the late–nineteenth–century’s great venereal scourge, syphilis). He predicted that eventually modern science would turn its juggernaut of skepticism upon itself, question the validity of its own foundations, tear them apart, and self–destruct. I thought about that in the summer of 1994 when a group of mathematicians and computer scientists held a conference at the Santa Fe Institute on “Limits to Scientific Knowledge.” The consensus was that since the human mind is, after all, an entirely physical apparatus, a form of computer, the product of a particular genetic history, it is finite in its capabilities. Being finite, hardwired, it will probably never have the power to comprehend human existence in any complete way. It would be as if a group of dogs were to call a conference to try to understand The Dog. They could try as hard as they wanted, but they wouldn’t get very far. Dogs can communicate only about forty notions, all of them primitive, and they can’t record anything. The project would be doomed from the start. The human brain is far superior to the dog’s, but it is limited nonetheless. So any hope of human beings arriving at some final, complete, self–enclosed theory of human existence is doomed, too.

This, science’s Ultimate Skepticism, has been spreading ever since then. Over the past two years even Darwinism, a sacred tenet among American scientists for the past seventy years, has been beset by…doubts. Scientists—not religiosi—notably the mathematician David Berlinski (“The Deniable Darwin,” Commentary, June 1996) and the biochemist Michael Behe (Darwin’s Black Box, 1996), have begun attacking Darwinism as a mere theory, not a scientific discovery, a theory woefully unsupported by fossil evidence and featuring, at the core of its logic, sheer mush. (Dennett and Dawkins, for whom Darwin is the Only Begotten, the Messiah, are already screaming. They’re beside themselves, utterly apoplectic. Wilson, the giant, keeping his cool, has remained above the battle.) By 1990 the physicist Petr Beckmann of the University of Colorado had already begun going after Einstein. He greatly admired Einstein for his famous equation of matter and energy, E=mc2, but called his theory of relativity mostly absurd and grotesquely untestable. Beckmann died in 1993. His Fool Killer’s cudgel has been taken up by Howard Hayden of the University of Connecticut, who has many admirers among the upcoming generation of Ultimately Skeptical young physicists. The scorn the new breed heaps upon quantum mechanics (“has no real–world applications”…”depends entirely on fairies sprinkling goofball equations in your eyes”), Unified Field Theory (“Nobel worm bait”), and the Big Bang Theory (“creationism for nerds”) has become withering. If only Nietzsche were alive! He would have relished every minute of it!

Recently I happened to be talking to a prominent California geologist, and she told me: “When I first went into geology, we all thought that in science you create a solid layer of findings, through experiment and careful investigation, and then you add a second layer, like a second layer of bricks, all very carefully, and so on. Occasionally some adventurous scientist stacks the bricks up in towers, and these towers turn out to be insubstantial and they get torn down, and you proceed again with the careful layers. But we now realize that the very first layers aren’t even resting on solid ground. They are balanced on bubbles, on concepts that are full of air, and those bubbles are being burst today, one after the other.”

I suddenly had a picture of the entire astonishing edifice collapsing and modern man plunging headlong back into the primordial ooze. He’s floundering, sloshing about, gulping for air, frantically treading ooze, when he feels something huge and smooth swim beneath him and boost him up, like some almighty dolphin. He can’t see it, but he’s much impressed. He names it God.
