Link: We should hope—emphasis on the should—for a discipline of Actual Social Science, whose practitioners strive to report the truth, the whole truth, and nothing but the truth, with the same passionately dispassionate objectivity they might bring to the study of beetles, or algebraic topology—or that an alien superintelligence might bring to the study of humans.
We do not have a discipline of Actual Social Science. Possibly because we’re not smart enough to do it, but perhaps more so because we’re not smart enough to want to do it. No one has an incentive to lie about the homotopy groups of an n-sphere. If you’re asking questions about homotopy groups at all, you almost certainly care about getting the right answer for the right reasons. At most, you might be biased towards believing your own conjectures in the optimistic hope of achieving eternal algebraic-topology fame and glory, like Ruth Lawrence. But nothing about algebraic topology is going to be morally threatening in a way that will leave you fearing that your ideological enemies have seized control of the publishing-houses to plant lies in the textbooks to fuck with your head, or sobbing that a malicious God created the universe as a place of evil.
Okay, maybe that was a bad example; topology in general really is the kind of mindfuck that might be the design of an adversarial agency. (Remind me to tell you about the long line, which is like the line of real numbers, except much longer.)
In any case, as soon as we start to ask questions about humans—and far more so identifiable groups of humans—we end up entering the domain of politics.
We really shouldn’t. Everyone should perceive a common interest in true beliefs—maps that reflect the territory, simple theories that predict our observations—because beliefs that make accurate predictions are useful for making good decisions. That’s what “beliefs” are for, evolutionarily speaking: my analogues in humanity’s environment of evolutionary adaptedness were better off believing that (say) the berries from some bush were good to eat if and only if the berries were actually good to eat. If my analogues unduly-optimistically thought the berries were good when they actually weren’t, they’d get sick (and lose fitness), but if they unduly-pessimistically thought the berries were not good when they actually were, they’d miss out on valuable calories (and fitness).
(Okay, this story is actually somewhat complicated by the fact that evolution didn’t “figure out” how to build brains that keep track of probability and utility separately: my analogues in the environment of evolutionary adaptedness might also have been better off assuming that a rustling in the bush was a tiger, even if it usually wasn’t a tiger, because failing to detect actual tigers was so much more costly (in terms of fitness) than erroneously “detecting” an imaginary tiger. But let this pass.)
The problem is that, while any individual should always want true beliefs for themselves in order to navigate the world, you might want others to have false beliefs in order to trick them into mis-navigating the world in a way that benefits you. If I’m trying to sell you a used car, then—counterintuitively—I might not want you to have accurate beliefs about the car, if that would reduce the sale price or result in no deal. If our analogues in the environment of evolutionary adaptedness regularly faced structurally similar situations, and if it’s expensive to maintain two sets of beliefs (the real map for ourselves, and a fake map for our victims), we might end up with a tendency not just to be lying motherfuckers who deceive others, but also to self-deceive in situations where the payoffs (in fitness) of tricking others outweighed those of being clear-sighted ourselves.
That’s why we’re not smart enough to want a discipline of Actual Social Science. The benefits of having a collective understanding of human behavior—a shared map that reflects the territory that we are—could be enormous, but beliefs about our own qualities, and those of socially-salient groups to which we belong (e.g., sex, race, and class) are exactly those for which we face the largest incentive to deceive and self-deceive. Counterintuitively, I might not want you to have accurate beliefs about the value of my friendship (or the disutility of my animosity), for the same reason that I might not want you to have accurate beliefs about the value of my used car. That makes it a lot harder not just to get the right answer for the right reasons, but also to trust that your fellow so-called “scholars” are trying to get the right answer, rather than trying to sneak self-aggrandizing lies into the shared map in order to fuck you over. You can’t just write a friendly science book for oblivious science nerds about “things we know about some ways in which people are different from each other”, because almost no one is that oblivious. To write and be understood, you have to do some sort of positioning of how your work fits into the war over the shared map.
Murray positions Human Diversity as a corrective to a “blank slate” orthodoxy that refuses to entertain any possibility of biological influences on psychological group differences. The three parts of the book are pitched not simply as “stuff we know about biologically-mediated group differences” (the oblivious-science-nerd approach that I would prefer), but as a rebuttal to “Gender Is a Social Construct”, “Race Is a Social Construct”, and “Class Is a Function of Privilege.” At the same time, however, Murray is careful to position his work as nonthreatening: “there are no monsters in the closet,” he writes, “no dread doors that we must fear opening.” He likewise “state[s] explicitly that [he] reject[s] claims that groups of people, be they sexes or races or classes, can be ranked from superior to inferior [or] that differences among groups have any relevance to human worth or dignity.”
I think this strategy is sympathetic but ultimately ineffective. Murray is trying to have it both ways: challenging the orthodoxy, while denying the possibility of any unfortunate implications of the orthodoxy being false. It’s like … theistic evolution: satisfactory as long as you don’t think about it too hard, but among those with a high need for cognition, who know what it’s like to truly believe (as I once believed), it’s not going to convince anyone who hasn’t already broken from the orthodoxy.
Murray concludes, “Above all, nothing we learn will threaten human equality properly understood.” I strongly agree with the moral sentiment, the underlying axiology that makes this seem like a good and wise thing to say.
And yet I have been … trained. Trained to instinctively apply my full powers of analytical rigor and skepticism to even that which is most sacred. Because my true loyalty is to the axiology—to the process underlying my current best guess as to that which is most sacred. If that which was believed to be most sacred turns out to not be entirely coherent … then we might have some philosophical work to do, to reformulate the sacred moral ideal in a way that’s actually coherent.
“Nothing we learn will threaten X properly understood.” When you elide the specific assignment X := “human equality”, the form of this statement is kind of suspicious, right? Why “properly understood”? It would be weird to say, “Nothing we learn will threaten the homotopy groups of an n-sphere properly understood.”
This kind of claim to be non-disprovable seems like the kind of thing you would only invent if you were secretly worried about X being threatened by new discoveries, and wanted to protect your ability to backtrack and re-gerrymander your definition of X to protect what you (think that you) currently believe.
If being an oblivious science nerd isn’t an option, half-measures won’t suffice. I think we can do better by going meta and analyzing the functions being served by the constraints on our discourse and seeking out clever self-aware strategies for satisfying those functions without lying about everything. We mustn’t fear opening the dread meta-door in front of whether there actually are dread doors that we must fear opening.
Why is the blank slate doctrine so compelling, that so many feel the need to protect it at all costs? (As I once felt the need.) It’s not … if you’ve read this far, I assume you will forgive me—it’s not scientifically compelling. If you were studying humans the way an alien superintelligence would, trying to get the right answer for the right reasons (which can include conditional answers: if what humans are like depends on choices about what we teach our children, then there will still be a fact of the matter as to what choices lead to what outcomes), you wouldn’t put a whole lot of prior probability on the hypothesis “Both sexes and all ancestry-groupings of humans have the same distribution of psychological predispositions; any observed differences in behavior are solely attributable to differences in their environments.” Why would that be true? We know that sexual dimorphism exists. We know that reproductively isolated populations evolve different traits to adapt to their environments, like those birds with differently-shaped beaks that Darwin saw on his boat trip. We could certainly imagine that none of the relevant selection pressures on humans happened to touch the brain—but why? Wouldn’t that be kind of a weird coincidence?
If the blank slate doctrine isn’t scientifically compelling—it’s not something you would invent while trying to build shared maps that reflect the territory—then its appeal must have something to do with some function it plays in conflicts over the shared map, where no one trusts each other to be doing Actual Social Science rather than lying to fuck everyone else over.
And that’s where the blank slate doctrine absolutely shines—it’s the Schelling point for preventing group conflicts! (A Schelling point is a choice that’s salient as a focus for mutual expectations: what I think that you think that I think … &c. we’ll choose.) If you admit that there could be differences between groups, you open up the questions of in what exact traits and of what exact magnitudes, which people have an incentive to lie about to divert resources and power to their group by establishing unfair conventions and then misrepresenting those contingent bargaining equilibria as some “inevitable” natural order.
If you’re afraid of purported answers being used as a pretext for oppression, you might hope to make the question un-askable. Can’t oppress people on the basis of race if race doesn’t exist! Denying the existence of sex is harder—which doesn’t stop people from occasionally trying. “I realize I am writing in an LGBT era when some argue that 63 distinct genders have been identified,” Murray notes at the beginning of Appendix 2. But this oblique acerbity fails to pass the Ideological Turing Test. The language of has been identified suggests an attempt at scientific taxonomy—a project, which I share with Murray, of fitting categories to describe a preexisting objective reality. But I don’t think the people building 63-item typeahead “Gender” select fields for websites are thinking in such terms to begin with. The specific number 63 is ridiculous and can’t exist; it might as well be, and often is, a fill-in-the-blank free text field. Despite being insanely evil (where I mean the adjective literally rather than as a generic intensifier—evil in a way that is of or related to insanity), I must acknowledge this is at least good game theory. If you don’t trust taxonomists to be acting in good faith—if you think we’re trying to bulldoze the territory to fit a preconceived map—then destroying the language that would be used to build oppressive maps is a smart move.
The taboo mostly only applies to psychological trait differences, both because those are a sensitive subject, and because it’s easier to motivatedly see what you want to see in them: whereas things like height or skin tone can be directly seen and uncontroversially measured with well-understood physical instruments (like a meterstick or digital photo pixel values), psychological assessments are much more complicated and therefore hard to detach from the eye of the beholder. (If I describe Mary as “warm, compassionate, and agreeable”, the words mean something in the sense that they change what experiences you anticipate—if you believed my report, you would be surprised if Mary were to kick your dog and make fun of your nose job—but the things that they mean are a high-level statistical signal in behavior for which we don’t have a simple measurement device like a meterstick to appeal to if you and I don’t trust each other’s character assessments of Mary.)
Notice how the “not allowing sex and race differences in psychological traits to appear on shared maps is the Schelling point for resistance to sex- and race-based oppression” theory actually gives us an explanation for why one might reasonably have a sense that there are dread doors that we must not open. Undermining the “everyone is Actually Equal” Schelling point could catalyze a preference cascade—a slide down the slippery slope to the next Schelling point, which might be a lot worse than the status quo on the “amount of rape and genocide” metric, even if it does slightly better on “estimating heritability coefficients.” The orthodoxy isn’t just being dumb for no reason. By analogy, Galileo and Darwin weren’t trying to undermine Christianity—they had much more interesting things to think about—but religious authorities were right to fear heliocentrism and evolution: if the prevailing coordination equilibrium depends on lies, then telling the truth is a threat and it is disloyal. And if the prevailing coordination equilibrium is basically good, then you can see why purported truth-tellers striking at the heart of the faith might be believed to be evil.
Murray opens the parts of the book about sex and race with acknowledgments of the injustice of historical patriarchy (“When the first wave of feminism in the United States got its start […] women were rebelling not against mere inequality, but against near-total legal subservience to men”) and racial oppression (“slavery experienced by Africans in the New World went far beyond legal constraints […] The freedom granted by emancipation in America was only marginally better in practice and the situation improved only slowly through the first half of the twentieth century”). It feels … defensive? (To his credit, Murray is generally pretty forthcoming about how the need to write “defensively” shaped the book, as in a sidebar in the introduction that says that he’d prefer to say a lot more about evopsych, but he chose to just focus on empirical findings in order to avoid the charge of telling just-so stories.)
But this kind of defensive half-measure satisfies no one. From the oblivious-science-nerd perspective—the view that agrees with Murray that “everyone should calm down”—you shouldn’t need to genuflect to the memory of some historical injustice before you’re allowed to talk about Science. But from the perspective that cares about Justice and not just Truth, an insincere gesture or a strategic concession is all the more dangerous insofar as it could function as camouflage for a nefarious hidden agenda. If your work is explicitly aimed at destroying the anti-oppression Schelling-point belief, a few hand-wringing historical interludes and bromides about human equality having no testable implications (!!) aren’t going to clear you of the suspicion that you’re doing it on purpose—trying to destroy the anti-oppression Schelling point in order to oppress, and not because anything that can be destroyed by the truth, should be.
And sufficient suspicion makes communication nearly impossible. (If you know someone is lying, their words mean nothing, not even as the opposite of the truth.) As far as many of Murray’s detractors are concerned, it almost doesn’t matter what the text of Human Diversity says, how meticulously researched of a psychology/neuroscience/genetics lit review it is. From their perspective, Murray is “hiding the ball”: they’re not mad about this book; they’re mad about specifically chapters 13 and 14 of a book Murray coauthored twenty-five years ago. (I don’t think I’m claiming to be a mind-reader here; the first 20% of The New York Times’s review of Human Diversity is pretty explicit and representative.)
In 1994’s The Bell Curve: Intelligence and Class Structure in American Life, Murray and coauthor Richard J. Herrnstein argued that a lot of variation in life outcomes is explained by variation in intelligence. Some people think that folk concepts of “intelligence” or being “smart” are ill-defined and therefore not a proper object of scientific study. But that hasn’t stopped some psychologists from trying to construct tests purporting to measure an “intelligence quotient” (or IQ for short). It turns out that if you give people a bunch of different mental tests, the results all positively correlate with each other: people who are good at one mental task, like listening to a list of numbers and repeating them backwards (“reverse digit span”), are also good at others, like knowing what words mean (“vocabulary”). There’s a lot of fancy linear algebra involved, but basically, you can visualize people’s test results as a hyperellipsoid in some high-dimensional space where the dimensions are the different tests. (I rely on this “configuration space” visual metaphor so much for so many things that when I started my secret (“secret”) gender blog, it felt right to put it under a .space TLD.) The longest axis of the hyperellipsoid corresponds to the “g factor” of “general” intelligence—the choice of axis that cuts through the most variance in mental abilities.
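The “longest axis of the hyperellipsoid” idea can be sketched in a few lines of NumPy. This is a toy simulation, not real psychometric data: the number of subtests, the loadings, and the sample size are all invented for illustration. If several test scores each load on one latent factor plus independent noise, the top eigenvector of their correlation matrix (the first principal component) has all-positive loadings and accounts for the largest share of the variance—the statistical shadow of a “general factor”:

```python
# Illustrative simulation of a "general factor" (all numbers made up).
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
g = rng.standard_normal(n)  # latent "general" ability

# Four hypothetical subtests, each a mix of g and independent noise,
# scaled so each subtest has unit variance.
loadings = np.array([0.8, 0.7, 0.6, 0.5])
tests = np.column_stack(
    [w * g + np.sqrt(1 - w**2) * rng.standard_normal(n) for w in loadings]
)

corr = np.corrcoef(tests, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)  # eigenvalues in ascending order
first = eigvecs[:, -1]                   # principal axis (largest eigenvalue)
first *= np.sign(first.sum())            # fix the arbitrary sign convention

print("variance share of first axis:", eigvals[-1] / eigvals.sum())
print("all loadings positive:", bool((first > 0).all()))
```

Because every subtest correlates positively with every other, the top eigenvector is guaranteed to come out all-positive (Perron–Frobenius); the “g factor” in this toy world is just that axis.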
It’s important not to overinterpret the g factor as some unitary essence of intelligence rather than the length of a hyperellipsoid. It seems likely that if you gave people a bunch of physical tests, they would positively correlate with each other, such that you could extract a “general factor of athleticism”. (It would be really interesting if anyone’s actually done this using the same methodology used to construct IQ tests!) But athleticism is going to be a very “coarse” construct for which the tails come apart: for example, world champion 100-meter sprinter Usain Bolt’s best time in the 800 meters is reportedly only around 2:10 or 2:07! (For comparison, I ran a 2:08.3 in high school once!)
Anyway, so Murray and Herrnstein talk about this “intelligence” construct, and how it’s heritable, and how it predicts income, school success, not being a criminal, &c., and how Society is becoming increasingly stratified by cognitive abilities, as school credentials become the ticket to the new upper class.
This should just be more social-science nerd stuff, the sort of thing that would only draw your attention if, like me, you feel bad about not being smart enough to do algebraic topology and want to console yourself by at least knowing about the Science of not being smart enough to do algebraic topology. The reason everyone and her dog is still mad at Charles Murray a quarter of a century later is Chapter 13, “Ethnic Differences in Cognitive Ability”, and Chapter 14, “Ethnic Inequalities in Relation to IQ”. So, apparently, different ethnic/”racial” groups have different average scores on IQ tests. Ashkenazi Jews do the best, which is why I sometimes privately joke that the fact that I’m only 85% Ashkenazi (according to 23andMe) explains my low IQ. (I got a 131 on the WISC-III at age 10, but that’s pretty dumb compared to some of my robot-cult friends.) East Asians do a little better than Europeans/”whites”. And—this is the part that no one is happy about—the difference between U.S. whites and U.S. blacks is about Cohen’s d ≈ 1. (If two groups differ by d = 1 on some measurement that’s normally distributed within each group, that means that the mean of the group with the lower average measurement is at the 16th percentile of the group with the higher average measurement, or that a uniformly-randomly selected member of the group with the higher average measurement has a probability of about 0.76 of having a higher measurement than a uniformly-randomly selected member of the group with the lower average measurement.)
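The percentile and overlap figures in that parenthetical follow directly from the standard normal CDF, which the Python standard library can compute via `math.erf` (no data or external packages needed):

```python
# Checking the Cohen's d arithmetic from the text with the normal CDF.
from math import erf, sqrt

def phi(x: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

d = 1.0
# Percentile of the lower group's mean within the higher group:
print(round(phi(-d), 3))           # ≈ 0.159, i.e. the 16th percentile
# P(random member of higher group > random member of lower group):
# the difference of two unit-variance normals has standard deviation
# sqrt(2), so the threshold is d / sqrt(2).
print(round(phi(d / sqrt(2)), 3))  # ≈ 0.76
```

The same two-line calculation works for any d: plug in d ≈ 2.6 (the muscle-mass figure mentioned later) and the overlap probability climbs to about 0.97.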
Given the tendency for people to distort shared maps for political reasons, you can see why this is a hotly contentious line of research. Even if you take the test numbers at face value, racists trying to secure unjust privileges for groups that score well have an incentive to “play up” group IQ differences in bad faith even when they shouldn’t be relevant. As economist Glenn C. Loury points out in The Anatomy of Racial Inequality, cognitive abilities decline with age, and yet we don’t see a moral panic about the consequences of an aging workforce, because older people are construed by the white majority as an “us”—our mothers and fathers—rather than an outgroup. Individual differences in intelligence are also presumably less politically threatening because “smart people” as a group aren’t construed as a natural political coalition—although Murray’s work on cognitive class stratification would seem to suggest this intuition is mistaken.
It’s important not to overinterpret the IQ-scores-by-race results; there are a bunch of standard caveats that go here that everyone’s treatment of the topic needs to include. Again, just because variance in a trait is statistically associated with variance in genes within a population, does not mean that differences in that trait between populations are caused by genes: remember the illustrations about sun-deprived plants and internet-deprived red-haired children. Group differences in observed tested IQs are entirely compatible with a world in which those differences are entirely due to the environment imposed by an overtly or structurally racist society. Maybe the tests are culturally biased. Maybe people with higher socioeconomic status get more opportunities to develop their intellect, and racism impedes socio-economic mobility. And so on.
The problem is, a lot of the blank-slatey environmentally-caused-differences-only hypotheses for group IQ differences start to look less compelling when you look into the details. “Maybe the tests are biased”, for example, isn’t an insurmountable defeater to the entire endeavor of IQ testing—it is itself a falsifiable hypothesis, or can become one if you specify what you mean by “bias” in detail. One idea of what it would mean for a test to be biased is if it’s partially measuring something other than what it purports to be measuring: if your test measures a combination of “intelligence” and “submission to the hegemonic cultural dictates of the test-maker”, then individuals and groups that submit less to your cultural hegemony are going to score worse, and if you market your test as unbiasedly measuring intelligence, then people who believe your marketing copy will be misled into thinking that those who don’t submit are dumber than they really are. But if so, and if not all of your individual test questions are equally loaded on intelligence and cultural-hegemony, then the cultural bias should show up in the statistics. If some questions are more “fair” and others are relatively more culture-biased, then you would expect the order of item difficulties to differ by culture: the “item characteristic curve” plotting the probability of getting a biased question “right” as a function of overall test score should differ by culture, with the hegemonic group finding it “easier” and others finding it “harder”. Conversely, if the questions that discriminate most between differently-scoring cultural/ethnic/”racial” groups were the same as the questions that discriminate between (say) younger and older children within each group, that would be the kind of statistical clue you would expect to see if the test was unbiased and the group difference was real.
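The item-characteristic-curve idea can be made concrete with a tiny model. In item response theory, the probability of answering a question correctly is modeled as a logistic function of overall ability; a culturally biased item is one whose curve shifts for one group even at the same ability level. All the parameters below are invented purely for illustration:

```python
# A sketch of the item-bias test (all parameters invented).
from math import exp

def p_correct(ability: float, difficulty: float, discrimination: float = 1.0) -> float:
    """Item characteristic curve: P(correct answer | ability)."""
    return 1.0 / (1.0 + exp(-discrimination * (ability - difficulty)))

ability = 0.0  # the same overall ability in both groups

# A "fair" item: the same curve regardless of group membership.
fair = p_correct(ability, difficulty=-0.5)

# A "biased" item: effectively harder for the non-hegemonic group,
# modeled as a shifted difficulty at identical ability.
biased_in_group = p_correct(ability, difficulty=-0.5)
biased_out_group = p_correct(ability, difficulty=+0.5)

print(round(fair, 3), round(biased_in_group, 3), round(biased_out_group, 3))
```

If a real test contained items like the “biased” one, the order of item difficulties would differ by group at matched total scores—which is exactly the statistical signature that studies of differential item functioning go looking for.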
Hypotheses that accept IQ test results as unbiased, but attribute group differences in IQ to the environment, also make statistical predictions that could be falsified. Controlling for parental socioeconomic status only cuts the black–white gap by a third. (And note, on the hereditarian model, some of the correlation between parental SES and child outcomes is due to both being causally downstream of genes.) The mathematical relationship between between-group and within-group heritability means that the conjunction of wholly-environmentally-caused group differences, and the within-group heritability, makes quantitative predictions about how much the environments of the groups differ. Skin color is actually only controlled by a small number of alleles, so if you think Society’s discrimination on skin color causes IQ differences, you could maybe design a clever study that measures both overall-ancestry and skin color, and does statistics on what happens when they diverge. And so on.
In mentioning these arguments in passing, I’m not trying to provide a comprehensive lit review on the causality of group IQ differences. (That’s someone else’s blog.) I’m not (that?) interested in this particular topic, and without having mastered the technical literature, my assessment would be of little value. Rather, I am … doing some context-setting for the problem I am interested in, of fixing public discourse. The reason we can’t have an intellectually-honest public discussion about human biodiversity is because good people want to respect the anti-oppression Schelling point and are afraid of giving ammunition to racists and sexists in the war over the shared map. “Black people are, on average, genetically less intelligent than white people” is the kind of sentence that pretty much only racists would feel good about saying out loud, independently of its actual truth value. In a world where most speech is about manipulating shared maps for political advantage rather than getting the right answer for the right reasons, it is rational to infer that anyone who entertains such hypotheses is either motivated by racial malice, or is at least complicit with it—and that rational expectation isn’t easily canceled with a pro forma “But, but, civil discourse” or “But, but, the true meaning of Equality is unfalsifiable” disclaimer.
To speak to those who aren’t already oblivious science nerds—or are committed to emulating such, as it is scientifically dubious whether anyone is really that oblivious—you need to put more effort into your excuse for why you’re interested in these topics. Here’s mine, and it’s from the heart, though it’s up to the reader to judge for herself how credible I am when I say this—
I don’t want to be complicit with hatred or oppression. I want to stay loyal to the underlying egalitarian–individualist axiology that makes the blank slate doctrine sound like a good idea. But I also want to understand reality, to make sense of things. I want a world that’s not lying to me. Having to believe false things—or even just not being able to say certain true things when they would otherwise be relevant—exacts a dire cost on our ability to make sense of the world, because you can’t just censor a few forbidden hypotheses—you have to censor everything that implies them, and everything that implies the things that imply them: the more adept you are at making logical connections, the more of your mind you need to excise to stay in compliance.
We can’t talk about group differences, for fear that anyone arguing that differences exist is just trying to shore up oppression. But … structural oppression and actual group differences can both exist at the same time. They’re not contradicting each other! Like, the fact that men are physically stronger than women (on average, but the effect size is enormous, like d ≈ 2.6 for total muscle mass) is not unrelated to the persistence of patriarchy! (The ability to credibly threaten to physically overpower someone, gives the more powerful party a bargaining advantage, even if the threat is typically unrealized.) That doesn’t mean patriarchy is good; to think so would be to commit the naturalistic fallacy of attempting to derive an ought from an is. No one would say that famine and plague are good just because they, too, are subject to scientific explanation. This is pretty obvious, really? But similarly, genetically-mediated differences in cognitive repertoires between ancestral populations are probably going to be part of the explanation for why we see the particular forms of inequality and oppression that we do, just as a brute fact of history devoid of any particular moral significance, like how part of the explanation for why European conquest of the Americas happened earlier and went smoother for the invaders than the colonization of Africa, had to do with the disease burden going the other way (Native Americans were particularly vulnerable to smallpox, but Europeans were particularly vulnerable to malaria).
Again—obviously—is does not imply ought. In deference to the historically well-justified egalitarian fear that such hypotheses will primarily be abused by bad actors to portray their own group as “superior”, I suspect it’s helpful to dwell on science-fictional scenarios in which the boot of history is on one’s own neck, if it does not happen to be there in real life. If a race of lavender humans from an alternate dimension were to come through a wormhole and invade our Earth and cruelly subjugate your people, you would probably be pretty angry, and maybe join a paramilitary group aimed at overthrowing lavender supremacy and re-instantiating civil rights. The possibility of a partially-biological explanation for why the purple bastards discovered wormhole generators when we didn’t (maybe they have d ≈ 1.8 on us in visuospatial skills, enabling their population to be first to “roll” a lucky genius (probably male) who could discover the wormhole field equations), would not make the conquest somehow justified.
I don’t know how to build a better world, but it seems like there are quite general grounds on which we should expect that it would be helpful to be able to talk about social problems in the language of cause and effect, with the austere objectivity of an engineering discipline. If you want to build a bridge (that will actually stay up), you need to study the “the careful textbooks [that] measure […] the load, the shock, the pressure [that] material can bear.” If you want to build a just Society (that will actually stay up), you need a discipline of Actual Social Science that can publish textbooks, and to get that, you need the ability to talk about basic facts about human existence and make simple logical and statistical inferences between them.
And no one can do it! (“Well for us, if even we, even for a moment, can get free our heart, and have our lips unchained—for that which seals them hath been deep-ordained!”) Individual scientists can get results in their respective narrow disciplines; Charles Murray can just barely summarize the science to a semi-popular audience without coming off as too overtly evil to modern egalitarian moral sensibilities. (At least, the smarter egalitarians? Or, maybe I’m just old.) But at least a couple aspects of reality are even worse (with respect to naïve, non-renormalized egalitarian moral sensibilities) than the ball-hiders like Murray can admit, having already blown their entire Overton budget explaining the relevant empirical findings.
Murray approvingly quotes Steven Pinker (a fellow ball-hider, though Pinker is better at it): “Equality is not the empirical claim that all groups of humans are interchangeable; it is the moral principle that individuals should not be judged or constrained by the average properties of their group.”
A fine sentiment. I emphatically agree with the underlying moral intuition that makes “Individuals should not be judged by group membership” sound like a correct moral principle—one cries out at the monstrous injustice of the individual being oppressed on the basis of mere stereotypes of what other people who look like them might statistically be like.
But can I take this literally as the exact statement of a moral principle? Technically?—no! That’s actually not how epistemology works! The proposed principle derives its moral force from the case of complete information: if you know for a fact that I have moral property P, then it would be monstrously unjust to treat me differently just because other people who look like me mostly don’t have moral property P. But in the real world, we often—usually—don’t have complete information about people, or even about ourselves.
Bayes’s theorem (just a few inferential steps away from the definition of conditional probability itself, barely worthy of being called a “theorem”) states that for hypothesis H and evidence E, P(H|E) = P(E|H)P(H)/P(E). This is the fundamental equation that governs all thought. When you think you see a tree, that’s really just your brain computing a high value for the probability of your sensory experiences given the hypothesis that there is a tree, multiplied by the prior probability that there is a tree, as a fraction of all the possible worlds that could be generating your sensory experiences.
What goes for seeing trees goes for “treating individuals as individuals”: the process of getting to know someone as an individual involves your brain exploiting the statistical relationships between what you observe and what you’re trying to learn about. If you see someone wearing an Emacs tee-shirt, you’re going to assume that they probably use Emacs, and asking them about their dot-emacs file is going to seem like a better casual conversation-starter than it would be with someone wearing a non-Emacs shirt. Not with certainty—maybe they just found the shirt in a thrift store and thought it looked cool—but the shirt shifts the probabilities that inform your decisionmaking.
The problem that Bayesian reasoning poses for naïve egalitarian moral intuitions, is that, as far as I can tell, there’s no philosophically principled reason for “probabilistic update about someone’s psychology on the evidence that they’re wearing an Emacs shirt” to be treated fundamentally differently from “probabilistic update about someone’s psychology on the evidence that she’s female”. These are of course different questions, but to a Bayesian reasoner (an inhuman mathematical abstraction for getting the right answer and nothing else), they’re the same kind of question: the correct update to make is an empirical matter that depends on the actual distribution of psychological traits among Emacs-shirt-wearers and among women. (In the possible world where most people wear tee-shirts from the thrift store that looked cool without knowing what they mean, the “Emacs shirt → Emacs user” inference would usually be wrong.) But to a naïve egalitarian, judging someone on their expressed affinity for Emacs is good, but judging someone on their sex is bad and wrong.
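The dependence on base rates can be made concrete in a few lines of arithmetic. A minimal sketch, with all the probabilities invented for illustration (they aren't measurements of anything):

```python
def posterior(p_e_given_h, p_h, p_e_given_not_h):
    """P(H|E) via Bayes' theorem, expanding P(E) over H and not-H."""
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
    return p_e_given_h * p_h / p_e

# World 1: Emacs shirts are mostly worn by actual Emacs users.
p_user = 0.05            # P(uses Emacs) in the general population
p_shirt_if_user = 0.10   # P(Emacs shirt | uses Emacs)
p_shirt_if_not = 0.001   # P(Emacs shirt | doesn't use Emacs)
print(posterior(p_shirt_if_user, p_user, p_shirt_if_not))  # ≈ 0.84

# World 2: everyone wears random thrift-store shirts, so the shirt is
# equally likely either way and the posterior just equals the prior.
print(posterior(0.002, p_user, 0.002))  # ≈ 0.05
```

The same rule applied to the same observation yields a strong inference in one world and no inference in the other; whether the “Emacs shirt → Emacs user” update is correct is entirely a question about the actual distribution of shirt-wearers.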
I used to be a naïve egalitarian. I was very passionate about it. I was eighteen years old. I am—again—still fond of the moral sentiment, and eager to renormalize it into something that makes sense. (Some egalitarian anxieties do translate perfectly well into the Bayesian setting, as I’ll explain in a moment.) But the abject horror I felt at eighteen at the mere suggestion of making generalizations about people just—doesn’t make sense. It’s not even that it shouldn’t be practiced (it’s not that my heart wasn’t in the right place), but that it can’t be practiced—that the people who think they’re practicing it are just confused about how their own minds work.
Give people photographs of various women and men and ask them to judge how tall the people in the photos are, as Nelson et al. 1990 did, and people’s guesses reflect not only the photo-subjects’ actual heights, but also (to a lesser degree) their sex. Unless you expect people to be perfect at assessing height from photographs (when they don’t know how far away the cameraperson was standing, aren’t “trigonometrically omniscient”, &c.), this behavior is just correct: men really are taller than women on average, so P(true-height|apparent-height, sex) ≠ P(true-height|apparent-height) because of regression to the mean (and women and men regress to different means). But this all happens subconsciously: in the same study, when the authors tried height-matching the photographs (for every photo of a woman of a given height, there was another photo in the set of a man of the same height) and telling the participants about the height-matching and offering a cash reward to the best height-judge, more than half of the stereotyping effect remained. It would seem that people can’t consciously readjust their learned priors in reaction to verbal instructions pertaining to an artificial context.
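The regression-to-different-means point corresponds to a standard normal-normal Bayesian model: the best estimate of true height is a precision-weighted average of the apparent height and the group mean. A sketch with hypothetical numbers (the means, spreads, and noise levels are made up for illustration, not taken from Nelson et al.):

```python
def estimate_height(apparent_cm, group_mean_cm, prior_sd=7.0, noise_sd=5.0):
    """Posterior mean under a normal prior N(group_mean, prior_sd^2) and a
    noisy observation with noise N(0, noise_sd^2): a precision-weighted
    average, i.e., shrinkage of the observation toward the group mean."""
    w_obs = 1 / noise_sd**2
    w_prior = 1 / prior_sd**2
    return (w_obs * apparent_cm + w_prior * group_mean_cm) / (w_obs + w_prior)

# Two hypothetical photo-subjects who *look* exactly 175 cm tall:
print(estimate_height(175, group_mean_cm=162))  # woman: estimate pulled below 175
print(estimate_height(175, group_mean_cm=176))  # man: estimate pulled toward 176
```

Same apparent height, different rational estimates, exactly because the two subjects regress to different means.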
Once you understand at a technical level that probabilistic reasoning about demographic features is both epistemically justified, and implicitly implemented as part of the way your brain processes information anyway, then a moral theory that forbids this starts to look less compelling? Of course, statistical discrimination on demographic features is only epistemically justified to exactly the extent that it helps get the right answer. Renormalized-egalitarians can still be properly outraged about the monstrous tragedies where I have moral property P but I can’t prove it to you, so you instead guess incorrectly that I don’t just because other people who look like me mostly don’t, and you don’t have any better information to go on—or tragedies in which a feedback loop between predictions and social norms creates or amplifies group differences that wouldn’t exist under some other social equilibrium.
Nelson et al. also found that when the people in the photographs were pictured sitting down, then judgments of height depended much more on sex than when the photo-subjects were standing. This too makes Bayesian sense: if it’s harder to tell how tall an individual is when they’re sitting down, you rely more on your demographic prior. In order to reduce injustice to people who are an outlier for their group, one could argue that there is a moral imperative to seek out interventions to get more fine-grained information about individuals, so that we don’t need to rely on the coarse, vague information embodied in demographic stereotypes. The moral spirit of egalitarian–individualism mostly survives in our efforts to hug the query and get specific information with which to discriminate amongst individuals. (And discriminate—to distinguish, to make distinctions—is the correct word.) If you care about someone’s height, it is better to precisely measure it using a meterstick than to just look at them standing up, and it is better to look at them standing up than to look at them sitting down. If you care about someone’s skills as a potential employee, it is better to give them a work-sample test that assesses the specific skills that you’re interested in, than it is to rely on a general IQ test, and it’s far better to use an IQ test than to use mere stereotypes. If our means of measuring individuals aren’t reliable or cheap enough, such that we still end up using prior information from immutable demographic categories, that’s a problem of grave moral seriousness—but in light of the mathematical laws governing reasoning under uncertainty, it’s a problem that realistically needs to be solved with better tests and better signals, not by pretending not to have a prior.
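The sitting-down finding falls out of the same precision-weighting arithmetic: increasing the observation noise shifts weight from the observation to the demographic prior. Again, a sketch with all numbers hypothetical:

```python
def posterior_mean(x, mu, prior_sd, noise_sd):
    """Normal-normal posterior mean: a precision-weighted blend of the
    observation x and the group-mean prior mu."""
    w_obs, w_prior = 1 / noise_sd**2, 1 / prior_sd**2
    return (w_obs * x + w_prior * mu) / (w_obs + w_prior)

# Same apparent height, same (hypothetical) female group mean; only the
# observation noise varies. The noisier the view, the closer the rational
# estimate sits to the group mean.
for noise_sd in (2.0, 5.0, 12.0):  # standing, middling, sitting
    print(noise_sd, posterior_mean(175, 162, prior_sd=7.0, noise_sd=noise_sd))
```

A meterstick corresponds to driving `noise_sd` toward zero, at which point the prior (and the stereotype) stops mattering at all, which is exactly the sense in which better measurement is the renormalized-egalitarian remedy.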
This could take the form of finer-grained stereotypes. If someone says of me, “Taylor Saotome-Westlake? Oh, he’s a man, you know what they’re like,” I would be offended—I mean, I would if I still believed that getting offended ever helps with anything. (It never helps.) I’m not like typical men, and I don’t want to be confused with them. But if someone says, “Taylor Saotome-Westlake? Oh, he’s one of those IQ 130, mid-to-low Conscientiousness and Agreeableness, high Openness, left-libertarian American Jewish atheist autogynephilic male computer programmers; you know what they’re like,” my response is to nod and say, “Yeah, pretty much.” I’m not exactly like the others, but I don’t mind being confused with them.
The other place where I think Murray is hiding the ball (even from himself) is in the section on “reconstructing a moral vocabulary for discussing human differences.” (I agree that this is a very important project!) Murray writes—
I think at the root [of the reluctance to discuss immutable human differences] is the new upper class’s conflation of intellectual ability and the professions it enables with human worth. Few admit it, of course. But the evolving zeitgeist of the new upper class has led to a misbegotten hierarchy whereby being a surgeon is better in some sense of human worth than being an insurance salesman, being an executive in a high-tech firm is better than being a housewife, and a neighborhood of people with advanced degrees is better than a neighborhood of high-school graduates. To put it so baldly makes it obvious how senseless it is. There shouldn’t be any relationship between these things and human worth.
I take strong issue with Murray’s specific examples here—as an incredibly bitter autodidact, I care not at all for formal school degrees, and as my fellow nobody pseudonymous blogger Harold Lee points out, many of those stuck in the technology rat race aspire to escape to a more domestic- and community-focused life not unlike that of a housewife. But after quibbling with the specific illustrations, I think I’m just going to bite the bullet here?
Yes, intellectual ability is a component of human worth! Maybe that’s putting it baldly, but I think the alternative is obviously senseless. The fact that I have the ability and motivation to (for example, among many other things I do) write this cool science–philosophy blog about my delusional paraphilia where I do things like summarize and critique the new Charles Murray book, is a big part of what makes my life valuable—both to me, and to the people who interact with me. If I were to catch COVID-19 next month and lose 40 IQ points due to oxygen-deprivation-induced brain damage and not be able to write blog posts like this one anymore, that would be extremely terrible for me—it would make my life less-worth-living. (And this kind of judgment is reflected in health and economic policymaking in the form of quality-adjusted life years.) And my friends who love me, love me not as an irreplaceably-unique-but-otherwise-featureless atom of person-ness, but because my specific array of cognitive repertoires makes me a specific person who provides a specific kind of company. There can’t be such a thing as literally unconditional love, because to love someone in particular, implicitly imposes a condition: you’re only committed to love those configurations of matter that constitute an implementation of your beloved, rather than someone or something else.
Murray continues—
The conflation of intellectual ability with human worth helps to explain the new upper class’s insistence that inequalities of intellectual ability must be the product of environmental disadvantage. Many people with high IQs really do feel sorry for people with low IQs. If the environment is to blame, then those unfortunates can be helped, and that makes people who want to help them feel good. If genes are to blame, it makes people who want to help them feel bad. People prefer feeling good to feeling bad, so they engage in confirmation bias when it comes to the evidence about the causes of human differences.
I agree with Murray that this kind of psychology explains a lot of the resistance to hereditarian explanations. But as long as we’re accusing people of motivated reasoning, I think Murray’s solution is engaging in a similar kind of denial, but just putting it in a different place. The idea that people are unequal in ways that matter is legitimately too horrifying to contemplate, so liberals deny the inequality, and conservatives deny that it matters. But I think if you really understand the fact–value distinction and see that the naturalistic fallacy is, in fact, a fallacy (and not even a tempting one), that the progress of humankind has consisted of using our wits to impose our will on an indifferent universe, then the very concept of “too horrifying to contemplate” becomes a grave error. The map is not the territory: contemplating doesn’t make things worse; not-contemplating that which is already there can’t make things better—and can blind you to opportunities to make things better.
Recently, Richard Dawkins drew a lot of criticism on social media for pointing out that selective breeding would work on humans (that is, succeed at increasing the value of the traits selected for in subsequent generations), for the same reasons it works on domesticated nonhuman animals—while stressing, of course, that he deplores the idea: it’s just that our moral commitments can’t constrain the facts. Intellectuals with the reading-comprehension skill, including Murray, leapt to defend Dawkins and concur on both points—that eugenics would work, and that it would obviously be terribly immoral. And yet no one seems to bother explaining or arguing why it would be immoral. Yes, obviously murdering and sterilizing people is bad. But if the human race is to continue and people are going to have children anyway, those children are going to be born with some distribution of genotypes. There are probably going to be human decisions that do not involve murdering and sterilizing people that would affect that distribution—perhaps involving selection of in vitro fertilized embryos. If the distribution of genotypes were to change in a way that made the next generation grow up happier, and healthier, and smarter, that would be good for those children, and it wouldn’t hurt anyone else! Life is not a zero-sum game! This is pretty obvious, really? But if no one except nobody pseudonymous bloggers can even say it, how are we to start the work?
The author of the Xenosystems blog mischievously posits five stages of knowledge of human biodiversity (in analogy to the famous, albeit reportedly lacking in empirical support, five-stage Kübler-Ross model of grief), culminating in Stage 4: Depression (“Who could possibly have imagined that reality was so evil?”) and Stage 5: Acceptance (“Blank slate liberalism really has been a mountain of dishonest garbage, hasn’t it? Guess it’s time for it to die …”).
I think I got stuck halfway between Stage 4 and 5? It can simultaneously be the case that reality is evil, and that blank slate liberalism contains a mountain of dishonest garbage. That doesn’t mean the whole thing is garbage. You can’t brainwash a human with random bits; they need to be specific bits with something good in them. I would still be with the program, except that the current coordination equilibrium is really not working out for me. So it is with respect for the good works enabled by the anti-oppression Schelling point belief, that I set my sights on reorganizing at the other Schelling point of just tell the goddamned truth—not in spite of the consequences, but because of the consequences of what good people can do when we’re fully informed. Each of us in her own way.