Raymond Wolters: My Journey To Race Realism

By Raymond Wolters (author of The Burden of Brown: Thirty Years of School Desegregation (1984), Du Bois and His Rivals (2005), The New Negro on Campus: Black College Rebellions of the 1920s (1975), Right Turn: William Bradford Reynolds, The Reagan Administration, and Black Civil Rights (1996), Race and Education, 1954-2007 (2009), and The Long Crusade: Profiles in Education Reform, 1967-2014 (2015).)

From 1944 until 1956, I attended Catholic schools in Los Angeles County. I mention this because the Catholic school curriculum of the 1940s and 1950s differed from the standard fare in public schools. Both school systems taught students to read, write, and cipher. But many public schools also adopted a social science curriculum that was intended to combat racial prejudice by showing that race was a historical and social construction, or myth, rather than a meaningful way to describe human diversity. Designed by Franz Boas and his disciples, this curriculum maintained that there was no scientific proof that race influenced the distribution of behavior, intelligence, or personality; that culture was more influential than race in determining the behavior of individuals; that “opportunities” were more important than “biology” in determining success. Boas acknowledged that his program, which was funded by the American Jewish Committee (AJC), was a form of propaganda. But so many schools adopted this curriculum between the late 1930s and the mid-1950s that Boas succeeded in revising the prevailing views about race. He showed, as Margaret Mead has noted, that it was “well within the power of educational leaders” to change “the climate of opinion in which young people are reared, a change which will inevitably have profound reverberations . . .” Some public school parents and students continued to believe that racial differences were important, but these parents and students also learned what teachers expected them to say about human diversity, and to recognize that adhering to the Boasian gospel would help them avoid a career-damaging faux pas.

Those of us who attended Catholic schools, on the other hand, were not taught to mimic Boas. The leaders of our Church (especially in Los Angeles) associated intercultural agitation with the propaganda of the Communist Party, and my teachers, while insisting that all individuals should be treated with respect, said little about racial matters and nothing about social construction. In grade school the nuns taught reading, writing, and arithmetic, drilled students on the Baltimore Catechism, and taught us something about the history of our Church. Then, in my high school, the curriculum was especially strong in math and foreign languages. All students were required to take at least two years of Latin (and two years of a modern language as well) and four years of math. In addition, there were four years of religion and English, three years of history, two years of science, and a year of civics. We learned the geometry of Euclid and were told about the theology of Thomas Aquinas. But we heard nary a word about Franz Boas and his disciples.

Thinking back, I recognize that my Catholic education was so prescriptive that it allowed little opportunity for what is now called “critical thinking.” But I received a sound foundation in the basics, and I benefited especially from a strong sense of community, an esprit de corps. This was especially true at my high school, St. Francis High School near Pasadena, where the parents’ auxiliary organized activities to support the school; where our students cheered on our athletic teams, the Golden Knights, as they sallied forth to battle with other schools. Most of the teachers at St. Francis were Capuchin priests from Ireland, and never since have I known a better company of men. The whole scene still reminds me of Camelot, King Arthur, and his Knights of the Round Table.

Meanwhile, at home I learned lessons that, while hardly “race realist,” were at odds with the Boasian approach to racial matters. My parents did not say much about the relative importance of “culture” and “biology,” but they and their friends regarded blacks as a distinctive group that should be kept at a distance socially. They allowed for individual exceptions, but they thought most blacks were less intelligent than whites or Asians, and less capable, perhaps even incapable, of developing or maintaining an advanced civilization. They gave me to understand that there were boundaries, the most significant being a taboo against inter-racial dating or marriage. At the same time, however, my parents and their friends also believed that blacks were often treated unfairly, and they wanted to end the abuses. They were opposed to flagrant discrimination against black workers and to disparaging the Negro race. They were opposed to laws that required segregation.

Any tension that might have resulted from this ambivalence was obscured at the time, for America was a predominantly white country in the 1940s and 1950s and everyone assumed it would remain that way. In these circumstances, the significant adults in my world believed that the courtesies of life could be exchanged across the color line without any danger to the established order. Formal segregation was not needed, but only because there was an informal consensus that blacks and whites, while interacting in public life and in the world of work (to some extent), should live in different neighborhoods and remain separate in most things social.

I was exposed to different views during the years when I was an undergraduate student at Stanford and a graduate student at Berkeley, 1956-1965. On these campuses the prevailing belief was that blacks had not just been treated unfairly but were, in addition, as capable as whites when it came to intellectual aptitude and the ability to develop and maintain civilizations. One of my favorite professors, Kenneth M. Stampp, summed up this view in memorable language, “Negroes are after all only white men with black skins, nothing more, nothing less.” Professor Stampp and most other professors subscribed to what is now called the Doctrine of Zero Group Differences – the DZGD. This is the idea that all important aptitudes are distributed equally among all large groups of people; and the complementary idea that if a sizeable group is under- or over-represented in any important way, the disparity is not due to inherent traits but to discrimination or to the accidents of history, climate, geography, or culture. My professors rejected the idea that Mother Nature – or Darwinian evolution – endowed different groups with different distributions of aptitude and ability.

Like many students at Berkeley in the 1960s, I participated in civil rights demonstrations. My first foray was in downtown Berkeley where I joined the Congress of Racial Equality (CORE) in picketing a Woolworth store. (I carried a sign that urged blacks, “Don’t Buy Where You Can’t Work.”) After that there were demonstrations in Oakland (where I joined other students who were urging upscale restaurants to hire blacks as waiters) and in San Francisco (where we demanded that car dealers employ blacks as salesmen). Then (as now), I thought it was a mistake for the government to discriminate for racial reasons; and I further thought that blacks should be given better opportunities to earn a living. At that time it never occurred to me that within a few decades the civil rights movement, and liberalism in general, would morph into a movement that would stigmatize white people, especially white men, while also censoring candid speech about race, gender, and sexuality.

In 1964-65, I supported the Free Speech Movement (FSM). I thought it was a mistake for the University of California to ban the advocacy of nonviolent illegal protests, such as marching without a permit or violating segregation ordinances. But I was a follower, not a leader. In 1964, while I was still a graduate student, I had received a faculty appointment as an Instructor in the Department of History. It was the lowest rank for a member of the faculty, but it was a faculty appointment and I knew I would receive unwelcome publicity if I were identified as a faculty member who was active in the leadership of the FSM. I also stayed back because I was far from a radical. I supported free speech, but I sensed that the leaders of the FSM hoped to achieve some sort of cultural transformation that I did not begin to understand. I was shocked to learn that pornographic homosexual movies were shown inside Sproul Hall when FSM students “occupied” the building on December 2, 1964.

Despite some reservations, during the early-1960s I was in tune with the prevailing ethos in Berkeley. When it came time for me to write a dissertation, I asked Charles Sellers to be my major professor, and I did so because Professor Sellers was more than an outstanding historian. He was also an activist who had been arrested as a “freedom rider” in Mississippi and later served as the president of the Berkeley chapter of the Congress of Racial Equality (CORE). At Berkeley, there were two ways for professors to distinguish themselves. One was to win a major prize for research; the other was by lecturing effectively to very large classes of several hundred students. Sellers distinguished himself in both respects. His biography of President James K. Polk had been awarded a Bancroft Prize, and his lectures in the introductory survey of American history were often interrupted with applause from the students. I was fortunate when Sellers agreed to supervise my dissertation on how African Americans were affected by the policies of Franklin D. Roosevelt’s New Deal — a topic somewhat distant from his own specialty, the age of Andrew Jackson. My choice of a dissertation topic was influenced by sympathy for the civil rights movement, to be sure. But there was more. I had left Pasadena for Berkeley, but I retained many traditional, middle-class values. I wanted to “get ahead,” and I calculated that a specialty in race relations would be a good way to “make it” in academe. I was betting that racial topics would be in vogue during the years to come.

Meanwhile, I had a personal life. In 1962 I married a young woman, Mary McCullough, who had been reared as a Southern Baptist and had graduated from Berkeley with a major in mathematics and a Phi Beta Kappa key. Mary then taught high school math for almost 30 years, while also taking care of our house and our three sons. For more than 50 years she has been a tremendous source of love and support. When Mary and I were wed we had moved away from the religious milieu of our youths, but we were married in a Catholic church, and we never embraced even a smidgen of the lifestyle liberalism that was coming into fashion. We cringed when we heard about feminist consciousness-raising sessions. When we were blessed with children, we spent time with traditional activities like scouts and youth sports. We sent our children to a predominantly white Catholic high school and, after having lapsed for several years, we began to attend Mass every Sunday.

In the 1960s and 1970s I rose through the academic ranks. My dissertation received favorable notice when it was published in 1970, and another book, published in 1975, received even better reviews. At the age of 36, I was promoted to the rank of full professor at the University of Delaware, and I began to think about research for yet another book. At that time, civil rights lawyers had brought a lawsuit seeking metropolitan busing for racial balance throughout the northern portion of New Castle County, Delaware. From reading the local newspaper, I learned that the largest city in this region, Wilmington, had been one of the first five jurisdictions that the Supreme Court, in Brown v. Board of Education of Topeka (1954), had ordered to desegregate its public schools. Wilmington complied immediately, but desegregation led to inter-racial scuffles and a decline in cultural and academic standards. This touched off white flight, and enrollment in Wilmington’s public schools tipped from 73% white to 90% black. I then learned that much the same had happened in three of the four other “Brown districts” – in Prince Edward County, Virginia, in Summerton, South Carolina, and in Washington D.C. Only in Topeka, Kansas, where blacks made up only 8% of the students, had the majority of whites continued to patronize the public schools. And desegregation had been problematic even in Topeka.

In my best-known book, The Burden of Brown (1984), I told the story of how public education had fared in these five districts where desegregation began. In the introduction and conclusion, and in a few statements that were interspersed in the text, I maintained that the misbehavior of black students had created serious problems and that federal judges had made matters worse by redefining desegregation to mean something quite different from the original understanding. When the implementation order for Brown was handed down in 1955, the Supreme Court defined “desegregation” as assigning students to public schools on “a racially non-discriminatory basis.” Similarly, in the Civil Rights Act of 1964, Congress defined what “desegregation” meant and what it did not mean: “’Desegregation’ means the assignment of students to public schools and within such schools without regard to their race, color, religion, or national origin, but ‘desegregation’ shall not mean the assignment of students to public schools in order to overcome racial imbalance.”

Beginning in the mid-1960s, however, and continuing for about 25 years, federal judges required assignment by race to ensure that the mix of races at individual schools would be approximately the same as the proportions that existed in a larger region or state. The constitutional mandate was changed from prohibiting racial discrimination to separate the races to requiring racial discrimination to mix them. “Desegregation” was re-defined to mean something quite different from what the 1954 Brown opinion had required; something that the 1964 Civil Rights Act had specifically said desegregation did not mean.

By the 1980s liberals were dominant in academia and in the national culture (although not in American politics at that time), and academic liberals had coalesced in support of the idea that racial justice required that students should be assigned by race to create racially balanced, white majority schools. The rationale for this was articulated most influentially by sociologist James S. Coleman. It held that the quality of a school depended largely on its youth culture and that middle-class schools were better. Since “white” was presumed to be synonymous with “middle class” and “black” the same as “lower class,” the purpose of integration was to create schools with enough middle-class white students to shape the prevailing attitudes and a substantial number of lower-class black children to benefit from being exposed to peers who recognized the importance of school work.

It took some courage – or maybe just contrariness – for me to challenge the liberal consensus. But I pointed out that careful studies of test scores did not support Professor Coleman’s theory; that, in fact, the evidence was so strong that Coleman eventually conceded that his theory was, in his own words, mistaken “wishful thinking.”

I noted in addition that busing for racial balance was at odds with the original understanding of Brown, and with the arguments that Thurgood Marshall and other black advocates had presented when the case was before the Supreme Court. In their oral arguments for the black plaintiffs, Marshall and other lawyers for the NAACP had said they were “Not asking for affirmative relief. . . . The only thing that we ask for is that the state-imposed racial segregation be taken off, and to leave the county school board . . . to assign children on any reasonable basis they want to assign them on . . . What we want from this Court is the striking down of race . . . Do not deny any child the right to go to the school of his choice on the grounds of race or color. . . . Do not assign them on the basis of race. If you have some other basis . . . any other basis, we have no objection. But just do not put in race or color as a factor.”

There was some criticism of my book. That was to be expected, since upholding the unalloyed benefits of racially balanced integration had become an unofficial party line in academia and in much of the mainstream media. But most reviews were favorable, and in 1985 the American Bar Association gave its annual Silver Gavel Award to The Burden of Brown. Civil rights activists picketed the awards dinner, but the awards committee stood by its decision, saying that “the award was made for literary merit and for shedding interesting light on legal history and issues . . .”

The Burden of Brown was written in sober prose that befit a scholarly book published by a university press. But the book was prompted by a profound sense of grievance. At the time, my wife and I were parents of young children who were happily attending good public schools in our predominantly white city (Newark, Delaware). But the courts’ redefinition of “desegregation” required our children to go to school with sizeable numbers of black students from Wilmington, which for decades had had one of America’s highest rates for robbery and murder, for rape and illegitimacy. My wife and I had no problem when our children attended schools with black students from Newark. But we instinctively knew we could not allow the assignment of our children to classes where 30 percent of the students would be children from the ghettoes of Wilmington. The more we heard about the ways “desegregation” was unfolding, the greater the threat to our children loomed. One of the math teachers at Newark High School, a woman who happened to be the wife of the chairman of my university’s history department, told me that so many black students roamed through the halls while classes were in session that “You’d think our school had become all black.” Another faculty wife, a physics teacher, told of the defiance she encountered when she came upon students who had stolen balloons and aluminum that were needed for an experiment. A third teacher, a veteran with seventeen years’ experience in the classroom, quit teaching after she heard the school superintendent say that “desegregation” was going well. The superintendent reminded her of “a general behind the lines who doesn’t really know what is happening to the soldiers on the front line.” She offered the following description of conditions at her school:

“The problems involving discipline are so acute in many classes that the teacher’s role is changing to that of a policeman with none of the safeguards. The problems of vandalism, violence, drug use and thievery seem to get worse each year. Cheating and copying have become endemic diseases. Graffiti appear on the walls, desks, and books. Litter piles up – a daily burden for overworked janitors. Too many students are absent or late for class. Noise in many classrooms has to be experienced to be believed. There is constant gossiping, bickering, intermingled with threats and obscenities.”

The more my wife and I heard about the ways “desegregation” was unfolding, the more we recognized that, because of busing for racial balance, we would have to pay the tuition for sending our children to private schools. Supreme Court Justice Lewis Powell diplomatically understated the resentment when he acknowledged that many parents regarded busing for racial balance as an interference with “the concept of community” and with the “liberty to direct the upbringing and education of children under their control.” There was tremendous resentment during the decade when court-ordered busing for racial balance was being debated and implemented in Delaware. Thousands of parents evaded “desegregation” either by moving elsewhere or by sending their children to private schools. Between 1971 and 1981, the enrollment of black students in the “desegregation area” remained steady at about 15,000, but white enrollments declined from 70,173 to 35,764.


Despite busing for racial balance, in the 1980s I was in the proverbial “catbird seat.” I was a tenured professor at a good university. I was on familiar terms with major historians of the United States. I was receiving invitations to speak at other universities in the United States and Canada. After I published my fourth book in 1996, I received an endowed professorship that was named in honor of Thomas Muncy Keith, a benefactor of the University of Delaware. Fortunately for me, the University of Delaware lagged behind many other schools when it came to the political indoctrination of students and to censoring professors who wrote or spoke in a manner that was deemed “insensitive.” It helped, I suppose, that The Burden of Brown equivocated when it came to accounting for the high incidence of misbehavior, and the low level of academic achievement, of most black students. In passing, I mentioned that this was true even of black students from well-to-do intact families. But I refused to accept biological explanations and mentioned instead the dysfunctional “constellation of values and attitudes” that made up black “culture.”

As chance would have it, I did not experience my first inkling of “race realism” until 1993, when my publisher sent me a review of a paperback edition of The Burden of Brown. The review appeared in American Renaissance, a magazine that was unknown to me then, and the comments were so lively and insightful that I read several back issues of the magazine. Reading these issues proved to be a fateful step on what turned out to be my road to Damascus. American Renaissance exposed me to views that differed markedly from the egalitarian opinions that prevailed in the mainstream of academe. The magazine introduced me to controversial new fields of scholarship and to concepts that were, to put it colloquially, “news to me.” The most important of the concepts was something called “race realism.” The most influential of the new fields was evolutionary biology.

In a 2010 talk to the Black Law Students Association at the University of Pennsylvania, journalist John Derbyshire gave a succinct summary of evolutionary biology. According to Derbyshire, mankind “separated into two parts, 50, 60, or 70 thousand years ago, depending on which paleoanthropologist you ask. One part remained in Africa, the ancestral homeland. The other crossed into Southwest Asia, then split, and re-split, and re-split until there were human populations living in near-total reproductive isolation from each other in all parts of the world. This went on for hundreds of generations, causing the divergences we see today. Different types, as well as differences in behavior, intelligence, and personality are exactly what one would expect to observe when scrutinizing these different populations.” Derbyshire further explained, “If a species is divided into separate populations, and those populations are left in reproductive isolation from each other for many generations, they will diverge. If you return after several hundred generations have passed, you will observe that the various traits that characterize individuals of the species are now distributed at different frequencies in the various populations. After a few thousands of generations, the divergence of the populations will be so great they can no longer cross-breed; and that is the origin of species.”

After Derbyshire’s talk at the law school, one of the professors opined that most of the students were baffled and perplexed. “They had never heard anything like that before,” the professor said. Neither had I, prior to the 1990s. And at that time, I was so focused on archival research that I paid scant attention to evolutionary biology. “Theories come and go,” I said to myself. “The facts remain.” At that time I considered myself akin to a detective searching for facts. A reviewer of one of my books wrote that the volume was chock full of information and narrative but short on theories and arguments. The reviewer described my style as “the Sgt. Joe Friday school of analysis.” For those too young to remember the TV program Dragnet, Joe Friday was the hard-boiled detective known for his “Just the facts, Ma’am” style.

This reviewer characterized my approach correctly. In the 1960s, when I was choosing a topic for my doctoral dissertation, I settled on the New Deal and the Negro after another historian told me that at some point in the 1940s the leaders of the NAACP had deposited thousands of letters and other documents from the 1910s, 1920s, and 1930s in a New York warehouse that was tended by a single semi-retired teamster, a man named Max. As it turned out, Max did not understand why I or any other young person would be interested in these dusty papers. When he saw me typing away, copying the records, he would urge me to take the records back to my apartment in California. “Get a life,” Max told me. On one occasion Max said, “The Yankees are playing at the Stadium this afternoon. That’s where you should be.” Nevertheless, I persisted with my clerical work. I was 25 years old at the time, and I reckoned that even if I did not have the maturity to analyze the New Deal or to illuminate race relations, I could make a contribution to knowledge by summarizing the contents of those documents. I also recognized the probability that, eventually, those documents would wind up in a research library, where other scholars could check my citations.

In each of my next five books I prided myself for using documents that few other scholars, if any, had seen. I talked myself into the files of the prominent Communist Party theoretician Herbert Aptheker, who was the custodian of the papers of the black leader W. E. B. Du Bois. I did so at a time when Aptheker was denying access to those papers to almost every other historian. I cultivated the archivists and librarians at several black colleges. I curried favor with school board members in each of the five Brown school districts. I gained access to the papers of Ronald Reagan’s assistant attorney general for civil rights, William Bradford Reynolds. I was a digger, searching for new information in one collection after another. In the process I came to regard myself as “a real historian” whose method was different from – and better than — that of colleagues who spent their time reading and recycling information that was already available in one book or another. I derisively thought of them as “armchair historians.”

And then, around the year 2010, my health began to fail. Shortly before that I had decided to write a book on modern efforts to reform public education. But when I made plans to interview the leading reformers I felt a shortness of breath. Not long after that, I needed increasing amounts of oxygen from what became my constant companion, my vade mecum, an oxygen tank. I then scrapped my plans to go on the road for interviews and for research in primary sources. Instead, I did the only thing I could do at the time. I became an armchair historian. I read a lot. When I spent five months in 2012 waiting for a lung transplant at the Duke University Medical Center, I discovered a trove of information on the World Wide Web. I also learned a great deal more about evolutionary biology.

One of the pioneering books on evolution was Sociobiology, which the Harvard naturalist E. O. Wilson published in 1975. At that time the reigning orthodoxy held that human evolution ceased about 50,000 years ago when the first humans migrated out of Africa. Sociobiology and other books, however, showed that evolution was continuing, with humans as well as animals and plants still adapting to different environments. In doing so, Sociobiology called the Doctrine of Zero Group Differences (the DZGD) into question. Many people therefore considered Sociobiology outrageous and anathema. On one occasion a student rushed to Wilson’s lectern and doused the professor with water while others cheered, “Wilson, you’re all wet.” Others chanted, “Racist Wilson, You can’t Hide, We Charge You with Genocide.” For more than a year, Wilson stayed away from meetings of Harvard’s biology department because some of his colleagues had made a point of ostracizing him for his theories.

Nevertheless, Wilson was eventually celebrated as the pioneering founder of a new academic field. When a 25th anniversary edition of Sociobiology was published in 2000, it was evident that Wilson’s theory appealed to many of the best minds in science. By then, Amazon.com listed 416 titles under “sociobiology” and 1,218 under “human evolution.”

Students of evolution, and also their cousins in genomic (DNA) studies, were charting a course that challenged the Doctrine of Zero Group Differences. In 2009, Marshall Poe, a history professor at the University of Iowa, noted that evolutionists and genomic researchers were no longer “talking about skin, eye, or hair color.” They were “talking about intelligence, temperament, and a host of other traits.” They were saying, “The races . . . are differently abled in ways that really matter.” The work of University of Utah anthropologists Gregory Cochran and Henry Harpending was a good example. In The 10,000 Year Explosion (2009), they noted that people whose ancestors had a long experience living in agricultural communities eventually developed immunities to the diseases of domesticated livestock. And these people also were likely to have genes that fostered lactose tolerance. Cochran and Harpending said that nature also selected for traits of personality and character as well as for differences in physique and resistance to pathogens. They said that populations with long exposure to agriculture or commerce were likely to have experienced genetic mutations that made them more industrious, future-oriented, and orderly than populations that survived by hunting and foraging. They said the descendants of foragers were genetically disposed to be impulsive. They said natural selection had shaped human nature somewhat differently, depending on whether one’s long line of ancestors had lived in settled communities or as foragers. They speculated that “although impulsivity may have been adaptive in hunter-gatherer populations, selection pressures associated with agricultural living may have acted on some population gene pools by selecting for patience and the foregoing of immediate gratification.” In 2014, Cochran wrote that a person was an “idiot” (his word) if he or she thought “the optimum mental phenotype . . . [is] the same in tropical hunter-gatherers, arctic hunter-gatherers, Neolithic peasants, and medieval moneylenders.” Cochran insisted, “Natural selection must have generated significant differences between populations; differences whose consequences we see every day, and that have been copiously documented by psychometricians.”

Cochran and Harpending were expressing a view that has come to be known as the theory of gene-culture co-evolution. This theory holds that human beings influence evolution by creating new ways of life that have an effect on which sort of people Mother Nature chooses to survive and multiply. Gene-culture co-evolution maintains that different groups develop different cultures, and those cultures, in turn, have some bearing on the distribution of genes in a society. Depending on what is needed to thrive in different environments, people with different genes will be selected for survival. Over the course of millennia, different genes — including genes that influence behavior, intelligence, and personality (the BIP genes) – will blossom in some societies but not in others. The distribution and sequence of genes will vary, depending on whether a group lives in the tropics or the arctic; depending on whether a society is based on hunting, farming, commerce, or some other activity. The different populations will still have much in common, since they had the same ancestors in the prehistoric past. But genetic adaptations through the ages are not insignificant.

As genomic scientists mapped the human genome, some politicians weighed in. Bill Clinton, for example, celebrated the new research, saying it would “revolutionize the diagnosis, prevention, and treatment of most, if not all, human diseases.” This was an exaggeration. Genetic research has yet to revolutionize medicine, although it has contributed to several important advances. At the same time, however, genetic research has also reinforced the theory that people of different continental ancestries have inherited more than different skin tones. Genomic research has shown that people from different regions have also inherited genetic dispositions that influence behavior, intelligence, and personality. James D. Watson, a Harvard biologist who received a 1962 Nobel Prize as the co-discoverer of the double helix structure of DNA, conceded that “many persons of goodwill” saw “only harm in our looking too closely at individual genetic essences.” They did not wish to “face up to facts that will likely change the way we look at ourselves.” But Watson predicted that science would continue its march, and the truth would emerge. Eventually, Americans would recognize that there was “no firm reason to anticipate that the intellectual capacities of peoples geographically separated in their evolutions should prove to have evolved identically. Our wanting to reserve equal power of reason to some universal heritage of humanity will not be enough to make it so.” Watson also said he was “inherently gloomy about the prospect of Africa because all our social policies are based on the fact that their intelligence is the same as ours – whereas all the testing says not really.”

For this last statement, in 2007, Watson was suspended from his positions at the Cold Spring Harbor Laboratory, which he had led for 39 years, first as director, then as president, and then as chancellor. But other researchers carried on. In his book A Troublesome Inheritance (2014), science writer Nicholas Wade has described much of this research, with special mention of the different continental distributions of genes that influence intelligence, impulsiveness, and aggression.

A particularly interesting case concerned Bruce Lahn, a professor of human genetics at the University of Chicago. In 2005, Lahn presented evidence to show that mutations that affected the size of the human brain had occurred in Asia and Europe but not in Africa. Dr. Lahn, an immigrant from China, did not understand American political correctness and considered it “a triumphant moment” when he published two articles in a highly regarded journal, Science, maintaining that DNA changes had taken hold and spread widely in Europe and Asia but were not common in sub-Saharan Africa. One magazine described Lahn’s research as “the moment anti-racists and egalitarians have dreaded.”

At first Lahn stood by his research, saying, “Society will have to grapple with some very difficult facts.” But Lahn had second thoughts after he learned more about the extent to which his research had touched a raw nerve. The University of Chicago abandoned a patent application it had filed to develop a test that would draw on Lahn’s work. And some of Lahn’s co-authors were uncomfortable with the publicity their work was receiving. Lahn then turned to other projects, saying that he had questions about “whether some knowledge might not be worth having.” This statement predictably drew criticism on the World Wide Web. “Welcome to the new Dark Ages,” one writer scoffed. Lahn had been made “to stand before the altar of equality and recant. The sun moves around the earth.”

The major media have continued to give their readers and listeners to understand that all groups have an equal distribution of potential talent. The Doctrine of Zero Group Differences prevails in the mainstream. But scientists increasingly acknowledge that different environments select for different traits. Summarizing a view that is gaining popularity, science writer Nicholas Wade reported that racial differences arose “because after the ancestral human population in Africa spread throughout the world . . . geographical barriers prevented interbreeding.” Consequently, “under the influence of natural selection . . . people . . . diverged away from the ancestral population, creating new races.” In A Troublesome Inheritance (2014), Wade summarized and discussed a large body of research that indicated that “evolution in different environments has led to different distributions of genes that influence not just skin color but also behavior, intelligence and personality.”

Before 2010, I was aware of evolutionary biology and evolutionary psychology. As mentioned, during the 1990s I began to read American Renaissance, and about the same time one of my chums from grade school and high school, a bank examiner named Gene Stelzer, bent my ear with comments about Darwinism. Gene was also the first person to call my attention to The Occidental Quarterly, a journal I later came to regard as an indispensable guide to understanding white racial consciousness. At the University of Delaware, education professor Bob Hampel kept me informed about some of the best recent books in his field, and social scientist Linda Gottfredson told me about gene-culture co-evolution. But from mainstream historians I heard and read nothing about Darwinism or the interaction of culture and genes, and my own written work was still based primarily on archival research. It was not until 2010, when I was laid low by lung failure and could no longer rummage through archives, that I began to read deeply and to think seriously about evolutionary biology and evolutionary psychology. As it happened, at this time I was also thinking about the modern school reform movement, which since about 1990 had become, above all else, an effort to close the achievement gaps that show American blacks and Latinos lagging behind whites and Asians on standardized achievement tests.

In some ways, the reformers’ concern with test scores is surprising. In recent international comparisons, African Americans have done better on standardized tests than blacks in Africa or the Caribbean. Hispanic Americans have done better than Hispanics in Latin America. White Americans have done better than students in other predominantly white nations (except Finland). And Asian-American students have done as well as most students in Asia – and better than those in Korea or Japan. These results were achieved, moreover, at a time when an increasing proportion of American students were being reared in single-parent families and a growing proportion of parents did not speak English.

One might have expected much praise for America’s schools, but this was not the case. Instead of praising schools for America’s strong showing in international comparisons, school reformers blamed the schools for failing to eliminate differences in the average academic achievement of America’s different racial and ethnic groups. Reformers lamented that on most tests the racial and ethnic achievement gaps were almost as large as they had been in 1954, when the U.S. Supreme Court handed down its landmark decision on school desegregation, Brown v. Board of Education of Topeka. Eighty-five percent of black students and 75 percent of Latinos still scored below the median for white students. And because of this, reformers insisted that American schools and teachers had failed.

To overcome this failure, reformers initially insisted that more money should be spent on public schools that enrolled large numbers of black or Latino students. But the gaps persisted despite the equalization of funding. In fact, the gaps persisted even in areas where the expenditure for black and Latino students was larger than the expenditure for white and Asian students. In predominantly black Kansas City, where the expenditure per student was increased spectacularly, scores on standard achievement tests actually declined.

When that happened, reformers blamed teachers for the persistence of racial and ethnic achievement gaps. Instead of acknowledging that even capable teachers would fail if students were not motivated or lacked an aptitude for school work, reformers insisted that things would be better if the American educational system were refashioned. Reformers said that racial and ethnic achievement gaps could be closed if there were better teachers, and that the best way to get better teachers was to fire the teachers whose students made low scores on standardized tests, to hire replacements on probationary contracts, and to keep only those teachers who excelled in raising the test scores of their students.

As they moved from one proposal to another, school reformers insisted that there were no important racial differences. They said race was “only skin deep.” And when “better teachers” failed to close the gaps, reformers adjusted once again and demanded that more government funds be spent on child care, on early childhood education, and on pre-kindergarten programs. They believed, as Professor Robert Weissberg has noted, that racial and ethnic achievement gaps could be closed if only reformers could monitor the ways that black and Latino parents interacted with their two-, three-, and four-year-olds. Steve Sailer concluded that many reformers believed that black children should be “kept away from their families and raised by whites and middle-class blacks.”

Race realists and evolutionists, on the other hand, did not think that racial disparities in education could be eliminated. They believed, rather, that the disparities were, as John Derbyshire put it, “facts in the natural world, like the orbits of planets. They can’t be eliminated by social or political action.”

By the end of the twentieth century, a gulf separated race realists, evolutionary theorists, and most genomic scientists from school reformers, mainstream historians, the major media, and most social scientists and public leaders. Leading evolutionists and genomic scientists believed in the biological reality of race and also thought that racial and ethnic achievement gaps were inevitable products of evolutionary adaptations. These evolutionists and genomic scientists were, in John Derbyshire’s phrase, “Biologians.” On the other hand, most school reformers, public leaders, and mainstream writers were “Culturists.” They said human evolution stopped when our species emerged from Africa to populate the rest of the world. They maintained that the accidents of history and climate, culture and geography, accounted for any variation in the distribution of human characteristics. They said that, with the right sort of social reforms, ethnic and racial achievement gaps could be abolished.

I lack the scientific expertise to decide definitively in favor of either the Biologians or the Culturists, but I think the weight of the evidence favors the Biologians. One decisive factor for me is that Culturists insist that environmental factors — not just the above-mentioned accidents of history and climate, culture and geography, but also poverty, family traditions, marital instability, discrimination, and white and Asian privilege — are entirely responsible for the racial and ethnic gaps in academic achievement. Culturists reject the idea of gene-culture co-evolution. They say that IQ and other inherent qualities are not in any way responsible for the gaps. Culturists are absolutists: they think that culture is “everything.” Biologians, on the other hand, are “50-50 people”: they concede that culture matters but insist that evolution and heredity also count.

The Biologian approach impresses me as more sensible, although as noted I do not have the scientific knowledge to decide this question. What I do have is an interest in telling stories that are based on research. With that in mind, I have written a history of school reform, The Long Crusade: Profiles in Education Reform, 1967-2014 (2015). Three of my chapters describe leading progressive reformers: Jonathan Kozol, Howard Gardner, and Theodore Sizer; three more chapters describe the work of educators who favor “back-to-basics” approaches: Chris Whittle, Robert Slavin, and E. D. Hirsch; another three chapters focus on reformers who are associated with Teach for America and its progeny; and the final chapters describe three critics of school reform: Diane Ravitch, John Derbyshire, and Robert Weissberg. Without exception, the reformers are Culturists, while one of the three critics (Derbyshire) is a Biologian and another (Weissberg) leans that way.

The reformers have differed with one another in many respects. For all their differences, however, the progressive, traditional, and new-wave school reformers have one thing in common. They have failed to close, or even to significantly narrow, the racial and ethnic achievement gaps. As noted, at the time of Brown v. Board of Education of Topeka (1954), about 85 percent of black students and 75 percent of Hispanics scored below the median for white and Asian students. And in the 21st century, the disparities are just as great. The one exception is among black and Hispanic students who attend highly structured elementary and middle schools operated by the Knowledge Is Power Program (KIPP) and some KIPP clones. These schools have managed to improve the test scores of their mostly poor, predominantly minority students. I believe this outstanding achievement is due, at least in part, to the fact that KIPP and the KIPP clones enroll students who are not representative of the generality of students in their neighborhoods. In general, the story of school reform is a story of many failures. One is reminded of Thomas Edison’s unsuccessful efforts to make synthetic rubber in his laboratory. After many, many botched efforts, Edison refused to admit he had failed. Instead, he said he had discovered “99 ways not to make synthetic rubber.” Eventually, synthetic rubber was produced, and it is possible that our desperate gap closers will also succeed some day. But I doubt it.

I was pleased with some early responses to a draft of my book on school reformers. My friend and colleague, Gerard Mangone, sent me a note, saying the chapters were “very interesting,” but adding, “I’m not sure I perceive any evaluation by you.” The director of a university press similarly noted, “You have made a conscious decision not to organize the manuscript according to your own perspective on school reform.” These comments reminded me of the reviewer who, years before, had said my style reminded him of Sgt. Joe Friday and Dragnet: “Just the facts, ma’am.” The comments pleased me, for I wanted to be fair to the reformers, whom I consider well-intentioned. Yet they also led me to recognize that highly intelligent readers were not “getting” a message that I thought was implicit in the text. So I made a point of adding a conclusion that candidly summarized my opinions.

As I searched for a publisher, however, I became aware of problems. One pertained to the current state of commercial publishing. At the outset of my search, a literary agent in New York told me that he needed to know my argument because, he said, “today, all books about policy are argument books.” He told me that, when it came to commercial presses, it would not suffice to relate what famous educators advocated or how their positions played out in the real world. I would have to show that the school reformers were either geniuses or unrealistic ideologues.

This emphasis on making an argument ran counter to my training. As a student I had been taught to approach topics with an open mind and to consider all the evidence, not just the evidence that supported a preconceived conclusion. For personal reasons, I also had a longstanding preference for the traditional form of historical writing, in which narration, evocation, and explanation are joined within a descriptive chronology. After I graduated from Stanford in 1960, my mother and father were pleased when I became a student at Boalt Hall, the law school at the University of California, Berkeley. But they were not at all pleased, to put it mildly, when I decided, after only one month in law school, to transfer into Berkeley’s graduate program in history. My parents had both worked in business, and neither understood why I wanted to become a history professor instead of a lawyer. As it happened, my father was the best storyteller I have ever known. He regaled clients and friends with his tales! Through the years I have wondered if I could have avoided some heated family arguments if I had said, “Dad, I just want to be more like you. I want to make my living telling stories. I’d rather tell stories than make arguments.”

As a teacher and professor, I have also had a longstanding interest in the theories of famous educators and school reformers, especially if they had interesting stories that deserved a straight telling. When I began to write about the movement for school reform, I just wanted to describe the reformers’ programs. I did not know where the story would lead. Eventually, I came to believe that the reformers’ efforts to close ethnic and racial achievement gaps were doomed by evolution – a deduction that I mentioned in an afterword. But I did not organize all my material around this point, for when I began the book I knew nothing about the theory of gene-culture co-evolution and I had not considered the consequences of groups living in reproductive isolation from one another. When I did reach a conclusion, it made little sense to reorganize my narrative. That would have made me appear more argumentative than I am; and the major commercial publishers almost certainly would not have been interested in a book that argued that efforts to close the racial and ethnic achievement gaps were doomed to fail. Judging from most recent books, the major publishing houses are indeed interested in argumentative books – but only if those books make a case for reform. Commercial publishers want inspiring books about disadvantaged youths who have overcome the odds. They want optimistic works about how achievement gaps can be closed and no group left behind. They want books that will make readers less determinist and more optimistic.

Another problem pertains to academic publishing. University presses will publish dispassionate narrative accounts on some topics. But political correctness has become dominant on most campuses, and most university presses are loath to publish any book that does not toe the Culturist line on race or gender. The academic presses have learned a lesson from what has happened to leading scholars who have been punished for expressing views that the mainstream considers taboo. There was, for example, the dismissal of Professor James D. Watson at Cold Spring Harbor, which a writer for the New York Times defended with the preposterous statement that the Nobel Prize-winning co-discoverer of the structure of DNA “didn’t have a scientific leg to stand on.” There was the case of Berkeley professor Arthur Jensen, the world’s leading authority on IQ, who was required to have police protection while teaching some of his classes. There was the case of Lawrence Summers, who was forced out of the presidency of Harvard after he mentioned that fewer women than men had made extremely high scores on standardized mathematics tests. These scholars (and many others who could be mentioned) suffered reprisals for questioning the reigning educational doctrine – the Doctrine of Zero Group Differences.

The extent of the suppression is also illustrated in the treatment of journalists, novice scholars, and even students. William Saletan, a senior writer for the liberal webzine Slate, got into trouble in 2007 when he compared genetic research to an “oncoming train.” The first few cars, carrying medical discoveries, were “already in view,” Saletan wrote, and right behind loomed additional research that showed that, compared with whites, “blacks mature quickly . . . and develop teeth, strength, and dexterity earlier. They sit, crawl, walk and dress themselves earlier. They reach sexual maturity faster, and they have better eyesight.” There were additional carloads of research that identified genes that varied among populations, and Saletan thought it likely that some of these genes “affect mental traits.” “How this happened isn’t clear,” Saletan wrote, but he thought it possible that “genes for cognitive complexity became so crucial in some places that nature favored them over genes for speed and vision.” “Nature isn’t stupid,” Saletan wrote. “If Africans, Asians and Europeans evolved different genes, the reason is that their respective genes were suited to their respective environments.” Groups whose ancestors had evolved in frigid climes or high altitudes could be expected to differ from groups who traced their ancestry to the tropics or to sea level. Groups whose ancestors had been farmers or merchants for thousands of years could be expected to differ from groups with a long lineage of hunters and foragers.

Saletan said he wished that the New York Times had been correct when it told its readers that James D. Watson did not have “a scientific leg to stand on.” But this was not so. On the contrary, Saletan wrote, the time had come when egalitarians should “prepare for the possibility that equality of intelligence, in the sense of racial averages on tests, will turn out not to be true.” To drive his point home, Saletan suggested a historical parallel. He said that modern egalitarians were facing a challenge similar to the one that evolution once posed for Christian fundamentalists. At first, many Christians had denounced Darwin’s theory as, in the words of William Jennings Bryan, jeopardizing “the doctrine of brotherhood,” undermining “the sympathetic activities of a civilized society,” and “paralyzing the hope of reform.” Saletan believed the same values were under attack in modern America. “But this time, the threat is racial genetics, and the people struggling with it are liberals.” He predicted that those who disputed the [DNA] evidence eventually would be regarded as “liberal creationists.” He implied that disinterested, empirical science had established a Biologian explanation for the IQ gap and that racial egalitarians were engaging in a sort of wishful, quasi-superstitious social scientism.

Many of Slate’s predominantly liberal readers rejected Saletan’s assessment, just as many Christian fundamentalists once had rejected Darwinism. These readers flooded Slate with so many letters of complaint that the webzine’s editor, Jacob Weisberg, felt compelled to offer an explanation. Weisberg explained that because Saletan was a senior writer, his articles were not edited. But henceforth, Weisberg said, Saletan’s submissions would be edited “carefully.”

Saletan “got off easy.” Others did not fare so well. Jason Richwine, a Harvard-trained researcher who worked for the Heritage Foundation, ran into trouble when a guardian of political correctness reported that Richwine, in his doctoral dissertation, had mentioned that immigrants from Mexico had an average IQ of 89. This was not news. During the previous generation, many studies had made the same point and most knowledgeable social scientists accepted Richwine’s estimate. Nevertheless, Richwine was forced to resign from his position at the Heritage Foundation, and he came in for severe criticism at Harvard. One group of graduate students said Richwine’s Ph.D. degree should be rescinded on the ground that researchers in the social sciences were required to “take a course to ensure that we will not harm our research subjects.” As interpreted by these graduate students, this requirement prohibited researchers from criticizing their subjects – or from mentioning any evidence that might be construed as criticism. Another 1,200 Harvard students – a group that apparently knew little about the persistence of racial and ethnic achievement gaps despite the expenditure of billions of dollars for school reforms – signed a petition that said, “If there are IQ differences across racial and ethnic groups, why didn’t Richwine call for better education programs that would level out our difference.”

Harvard was not exceptional in this respect. The climate of opinion at many other American universities also led students to understand that any mention of heredity was in poor taste; and to sense that they could ease their way to the top of the status heap by embracing egalitarian views and by manifesting special concern for uplifting underrepresented minority groups. But once again this point came to the fore with special poignancy at Harvard, in the aftermath of a 2009 dinner conversation among three law students. The topic of discussion had been affirmative action, and one of the students was Stephanie Grace, who had graduated from Princeton with honors after writing a senior thesis on race relations. During the conversation, Grace had questioned the view that racial differences in IQ had a genetic basis. Afterwards, however, Grace had second thoughts. “I just hate leaving things where I feel I misstated my position,” she wrote in an e-mail to her friends. “I absolutely do not rule out the possibility that African Americans are, on average, genetically predisposed to be less intelligent. . . . I think it is at least possible that African Americans are less intelligent on a genetic level, and I didn’t mean to shy away from that opinion at dinner.”

A few months later Grace had a falling out with one of the friends, and the previously private e-mail was sent to the Black Law Students Associations (BLSAs) at Harvard and thirteen other schools. By that time Grace had become an editor of the Harvard Law Review and had been hired as a clerk for a federal judge. The BLSAs then demanded that Grace be fired, on the grounds that she would be in a position to oppress blacks if she were employed by a federal court.

Grace then learned what lay in store for people who questioned the idea that all groups have equal potential. The dean of the Harvard Law School, Martha Minow, denounced Grace for expressing skeptical open-mindedness on the topic of race and IQ. “I am writing,” Dean Minow stated in an open letter, “to address an email message in which one of our students suggested that black people are genetically inferior to white people.” Dean Minow insisted that Grace’s comments did “not reflect the views of the school or the overwhelming majority of the members of this community. . . . Here at Harvard Law School, we are committed to preventing degradation of any individual or group, including race-based insensitivity or hostility.”

Rather than stand by her skepticism, Stephanie Grace apologized. “I am deeply sorry for the pain caused by my email. I never intended to cause any harm, and I am heartbroken and devastated by the harm that has ensued. I would give anything to take it back. I emphatically do not believe that Africans are genetically inferior in any way. I understand why my words expressing even a doubt in that regard were and are offensive.”


These incidents give some indication of the climate of opinion while I was writing my account of school reformers’ efforts to close the racial and ethnic achievement gaps. The major commercial publishers wanted argumentative books that purported to show how the problem could be solved. And most university presses did not wish to be associated with books that were politically incorrect. I was fortunate, however, to find a publishing company, Washington Summit Publishers, that was willing to produce my book, The Long Crusade: Profiles in Education Reform, 1967-2014. It is scheduled for publication on August 28, 2015.
