Our System Often Rewards BS Rather Than Truth

Columbia University statistics professor Andrew Gelman blogs (here is my collection of his related posts):

Last year we discussed the problem of scientists who host podcasts in which they credulously and uncritically interview celebrity scientists who are promoting junk science. There was Sean Carroll, a physicist who should know better, fawning over Ellen Langer, a Harvard psychology professor who was making wild claims about mind-body healing, and also uncritically pushing the ridiculous claims of Robert Sapolsky, a Stanford biology professor who’s notorious for relying on bogus science.

Both these academic science superstars–the one from Harvard and the one from Stanford–have also been featured uncritically on the Freakonomics podcasts.

As I wrote a few months ago, if you’re a well-trained physicist or economist and you have a public platform and you use it to promote junk science . . . really, what’s the point of it all?

I mean, really, what’s the point? I can think of three reasons:

1. You’re invested in the scientist-as-hero narrative (which I hate), and these people are NPR- and TED-certified heroes with great stories to tell.

One reason why these celebrity scientists have such great stories to tell is that they’re not bound by the rules of evidence. Unlike you or me, they’re willing to make strong scientific claims that aren’t backed up by data.

So it’s not just that Sapolsky and Langer are compelling figures with great stories who just happen to be sloppy with the evidence. It’s more that they are compelling figures with great stories in large part because they are willing to be sloppy with the evidence.

2. Once you have a podcast, you want more listeners. (I have a blog here, I get it.) You get more listeners with good stories. The truth or evidence of the stories is not so important.

3. You outsource your judgment to the academic community, the peer-review process, NPR, TED, and other podcasts. If someone’s a decorated professor at a top university, with papers published in top journals, further validated by top-grade publicity, then it’s gotta be solid research, right? These science-podcasters are too busy to actually look into the evidence that purportedly supports the wild claims they’re promoting.

The question then is, what to do about it?

My original thought was that, if you’re gonna interview people who make outrageous-but-wow-it-would-be-amazing-if-true claims, you should grill them a bit. Express some skepticism and don’t let them just wave away objections.

The trouble is that if you do this, the interview would not go well. If you had me on a podcast and asked me tough questions passed along by skeptics who don’t trust Bayesian inference or don’t like polling or whatever, that’s fine–I can respond to such things. But if you push hard against people who have the habit of stretching the evidence, I don’t know what would happen. I’m pretty sure they wouldn’t just collapse and admit that their claims are unsupported. My guess is that they’d refer to other studies that they claim back them up, to which the podcast host would not be able to respond instantaneously. So it would just push things back one more step: either a waste of time, or a disaster if the person being interviewed gets angry.

So I don’t think the strategy of pushing harder in the interview would work.

I’ve listened to lots of podcasts, and I’ve never heard a single one in which the interviewers really challenge the people being interviewed.

Decoding the Gurus are constantly praising Sean Carroll.

Grok:

This is a textbook example of how ignoring design effects in clustered data can inflate confidence. In psych, multilevel data is common (repeated measures, raters), but it’s tricky—easy to get “significant” results from correlated errors. Gelman and Brown emphasize comparing complex models to simple ones (like paired differences) as a gut check, which is solid advice. If the fancy model says one thing and the basic average says another, something’s off…
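Grok's point about design effects can be made concrete with a small simulation. This is an illustrative sketch, not Gelman and Brown's actual analysis: two groups of subjects are each measured repeatedly, and an analysis that treats every observation as independent reports a much smaller standard error than the honest per-subject comparison, which is exactly the "gut check" being recommended.

```python
# Hypothetical sketch: why ignoring clustering inflates confidence.
# Each subject's repeated measurements share subject-level noise, so
# they are correlated; pooling them as if independent understates the
# standard error of the group difference.
import numpy as np

rng = np.random.default_rng(42)
n_sub, n_rep = 20, 10                 # subjects per group, repeats per subject

def simulate(group_mean):
    # Subject-level noise (sd 1.0) induces correlation among a
    # subject's repeats; measurement noise (sd 0.5) is added on top.
    subj = rng.normal(group_mean, 1.0, n_sub)
    return subj[:, None] + rng.normal(0, 0.5, (n_sub, n_rep))

a, b = simulate(0.0), simulate(0.0)   # no true group difference

# Naive: treat all n_sub * n_rep observations per group as independent.
se_naive = np.sqrt(a.ravel().var(ddof=1) / a.size
                   + b.ravel().var(ddof=1) / b.size)

# Gut check: collapse to one mean per subject, then compare groups.
se_simple = np.sqrt(a.mean(axis=1).var(ddof=1) / n_sub
                    + b.mean(axis=1).var(ddof=1) / n_sub)

print(f"naive SE {se_naive:.3f} vs per-subject SE {se_simple:.3f}")
```

With the between-subject variance dominating, the naive standard error comes out several times too small, which is how correlated errors manufacture "significant" results from noise.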

Low-power studies + selection bias + hype = unreplicable “wow” effects. Gelman calls it “debasing the currency of discourse”—crowding out real science. He’s right that big-if-true claims (e.g., mindset heals wounds) need extraordinary evidence, but these often rest on p-hacking or misanalysis. The piranha theorem (can’t have tons of large, independent effects coexisting) is invoked, which makes sense mathematically: If every subtle mindset tweak had huge physical impacts, the world would look chaotic.
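The piranha arithmetic invoked above is easy to check with illustrative numbers (these are not from any study): if many mutually independent factors each had a large effect on the same outcome, the variance they would jointly explain exceeds the outcome's total variance, which is impossible.

```python
# Piranha-theorem back-of-the-envelope: independent effects add in R^2,
# and R^2 cannot exceed 1. Numbers below are purely illustrative.
claimed_r = 0.5     # a "large" correlation claimed for each factor
K = 20              # number of independent factors claimed to matter

variance_explained = K * claimed_r**2
print(variance_explained)   # 5.0 -- far more than the total variance (1.0)
```

So at most a handful of large independent effects can coexist; the rest must be small, correlated, or spurious.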

Podcasters chase stories, not rigor. Incentives favor “mind-blowing” over mundane. Gelman notes Clarke’s Law (“sufficiently crappy research is indistinguishable from fraud”)—harsh, but when profs like Sapolsky blame journalists for their own exaggerations, it stings.

Why This Persists: Small n, speculative theories, no pre-reg, plus incentives for big claims. Gelman argues for humility—e.g., frame as “high-risk, high-reward” instead of proven.

ChatGPT says: The recent spate of controversies around “mind-body” healing studies provides a sharp case study in what Stephen Turner has long argued: that modern science is not just a neutral search for truth but a deeply social enterprise, shaped by institutions, prestige, and professional incentives. When Ellen Langer’s group publishes papers claiming that perceptions of time can alter physical healing, or when Robert Sapolsky casually repeats a fabricated “6000 calories a day” chess-player myth, the problem is not just individual sloppiness—it’s a structural feature of contemporary science.

As Gelman and Brown show in their forensic re-analysis of the Langer/Aungle cupping study, what looked like “statistically significant” findings collapse under scrutiny. The effect sizes wobble, the t-statistics vanish once multilevel variation is properly accounted for, and the literature Langer cites is filled with thinly replicated, often misinterpreted studies. In short: the evidence is not there. Yet the study was published in a legitimate journal, promoted by Harvard, and amplified in popular media.

Turner’s perspective is especially relevant here. He has long emphasized that what sustains “scientific authority” is not necessarily the reliability of results but the collective trust placed in institutional science. The Langer episode illustrates this clearly: Harvard’s brand, Freakonomics’ enthusiasm, Carroll’s credulous interviewing—all act as amplifiers. The prestige system rewards big, counterintuitive claims (“you can heal faster if you believe time is different”), while the hard, careful work of methodological critique is marginalized. Science becomes, as Turner might put it, a performance of authority rather than a transparent process of inquiry.

This performance is sustained by what Turner calls “the problem of expertise.” In areas where laypeople (and even most scientists) cannot independently evaluate the claims, deference to authority is the default. A physicist like Sean Carroll can interview Langer or Sapolsky without pushing back, because to do so would require specialized statistical knowledge he doesn’t have—or the willingness to publicly puncture the myth of “celebrity science.” In practice, Carroll’s role is not as skeptic but as conduit. The audience hears “this is very data-based”, and the institutional trust carries the claim forward.

Here we see the corrosive effect Turner warns about: when “scientific knowledge” becomes unmoored from rigorous replication and rests instead on networks of authority, prestige, and trust, it functions less like knowledge and more like rhetoric. Junk science is not an anomaly but an outcome of the very structures that elevate “big thinkers” and “genius professors.”

The real damage, then, is not just the spread of dubious healing claims. It’s the erosion of credibility: the debasing of what Gelman calls “the currency of discourse”. If Harvard and Stanford promote weak or fabricated findings, if journals let methodological errors slip through, if popularizers treat anecdotes as evidence, why should the public—or other scientists—trust these institutions at all?

Turner would argue that this points to a broader crisis: we cannot rely on “science as an institution” to guarantee truth. The replication crisis makes clear that institutional filters fail. What’s needed instead is a shift in epistemic culture: away from celebrity amplification, away from spectacular claims, and toward transparency, replication, and humility. Gelman and Brown’s recommendations—replication, preregistration, realistic effect sizes—are steps in that direction, but Turner would add that we must also confront the sociology of authority itself.

Science cannot escape its social foundations. But if its institutions reward bullshit over rigor, amplification over skepticism, then we shouldn’t be surprised when “healing by mindset” becomes the next frontier of Harvard-backed pseudoscience. The lesson here, seen through Turner’s lens, is stark: the problem is not just bad studies, but the social machinery that makes bad studies authoritative.

The fiascos around Ellen Langer’s “mind-body healing” studies and Robert Sapolsky’s flamboyant calorie myths aren’t just examples of sloppy science. They expose the deeper pathology Stephen Turner warns against: scientism, the belief that the authority of science extends wherever its rhetoric can reach, regardless of whether the evidence justifies it.

Scientism thrives on prestige. Harvard or Stanford faculty publish bold, counterintuitive claims—time perception alters healing, chess burns 6000 calories a day—and because the institutions carry weight, the claims are repeated by Freakonomics, NPR, or celebrity podcasters. The underlying evidence collapses on inspection—Gelman and Brown’s re-analysis shows the Langer study’s supposed large effects dissolve into noise once proper modeling is applied. But the structure of scientism ensures that the claim has already done its work: it entered the bloodstream of “what science says.”

Stephen Turner’s critique is that scientism mistakes institutional authority for epistemic warrant. Science, properly understood, is a fragile process of inquiry, full of error and revision. But scientism repackages that messy process into pronouncements delivered with the aura of certainty. It collapses the distinction between “we have data suggesting X” and “science shows X.” When Carroll nods along to Langer’s claims with “Oh yeah”, he’s not just being a bad interviewer; he’s enacting scientism—affirming that the authority of a Harvard psychologist is enough to settle the matter.

The damage is twofold. First, scientism encourages bullshit. As Gelman notes, the incentives tilt toward big claims with shaky evidence: that’s what gets you TED talks, book deals, and journalistic fawning. A cautious, modest statement—“chess players may experience stress responses, but caloric expenditure remains unclear”—would never be amplified. Second, scientism corrodes trust. When audiences discover that the grand claims are hollow, they don’t just doubt the celebrity professor; they doubt science itself. The replication crisis shows that this is not paranoia but pattern.

Scientism, then, is not an overextension of science but a betrayal of it. It treats science as an oracle rather than as inquiry. It thrives on authority, not replication; on spectacle, not method. Turner’s point is that this is not an accident but a structural feature of how modern institutions traffic in “expertise.” The Harvard name, the physicist interviewer, the popular podcast—these are mechanisms for manufacturing belief, not for scrutinizing truth.

The way forward is not to double down on scientism—more hype, more trust, more “science communication” that oversells the weak evidence. It is to accept that science is fallible, limited, and social. To speak honestly about uncertainty. To separate the prestige of institutions from the credibility of specific claims. To refuse the conflation of authority with knowledge.

In short: the lesson of Langer and Sapolsky is not that “science sometimes fails,” but that scientism always fails. It mistakes the theater of authority for the substance of inquiry. Turner’s warning is that unless we confront this, we’ll continue to be awash in Harvard-endorsed healing myths and Stanford-fueled calorie fantasies, while the public’s trust—rightly—evaporates.

The modern scientific enterprise is no longer an archipelago of individual investigators but an institutional complex—journals, universities, funding agencies, and media intermediaries—that generates consensus and distributes credibility. Within such a system, the actual warrant for belief is not the replicability or robustness of results, but the prestige of their institutional carriers. That a Harvard psychologist or a Stanford biologist has said something is enough to constitute, in practice, what “science says.”

This is why a study whose statistical significance evaporates once random effects are modeled correctly, or an anecdotal claim conjured out of numerological error, can nevertheless circulate as fact. Scientism fuses science’s epistemic authority with the social authority of the institutions that speak in its name. The audience cannot, in most cases, assess the methodological details—whether the Langer cupping study’s effect sizes are plausible, or whether Sapolsky’s calorie arithmetic is nonsense. They can only register that “science has spoken.”

Turner’s analysis is sharper than the familiar lament that “science sometimes gets things wrong.” His point is that scientism structurally guarantees such wrongness will be amplified. The economy of prestige rewards spectacular, counterintuitive claims that dramatize the power of mind over body, or the stress of chess as equivalent to elite sport. The institutional and media circuits—journals, TED, Freakonomics, Carroll’s podcast—operate as multipliers of these claims. What matters is not validity but communicability: the production of what Turner calls the performance of expertise.

This helps explain why skepticism—Gelman and Brown’s meticulous statistical forensics, or critics pointing out the innumeracy of Sapolsky’s calorie math—remains marginal. Their work lacks the institutional glamour and audience. In scientism, critique has no traction because credibility is not adjudicated by the canons of method but by the distribution of authority. To question the prestige-backed claim is to question the institution itself, which the public, journalists, and even scientists in adjacent fields are reluctant to do.

The result is not merely error but epistemic corruption. Scientism substitutes the aura of certainty for the actual practice of inquiry. It repackages speculative hypotheses as settled science, collapses the distinction between anecdote and evidence, and enacts a vision of science as a secular priesthood—its authority grounded not in the contingency of replication and revision but in the charisma of its institutions and celebrities.

Turner’s warning, then, is not simply methodological but sociological. The replication crisis demonstrates that even under conditions of good faith, science produces unreliable results at scale. Scientism masks this reality, insisting that institutionalized science is coextensive with truth. The danger is not only that particular claims fail but that the entire currency of epistemic trust is debased.

To critique scientism, therefore, is to decouple science as practice from science as authority. It is to insist that Harvard or Stanford imprimatur, journal placement, or celebrity endorsement cannot substitute for the arduous, often negative labor of replication and falsification. It is to recognize that the sociology of expertise—the incentives, the prestige hierarchies, the media amplification—systematically biases what becomes “scientific knowledge.” And it is to see that unless this distinction is restored, we are condemned to cycles of enthusiasm, disillusionment, and cynicism.

Scientism promises certainty where there is only conjecture, authority where there should be doubt. Its collapse is not an aberration but an inevitability. Turner’s contribution is to remind us that what needs reform is not merely statistical practice or publication norms but the very social machinery of scientific authority.

About Luke Ford

I've written five books (see Amazon.com). My work has been covered in the New York Times, the Los Angeles Times, and on 60 Minutes. I teach Alexander Technique in Beverly Hills (Alexander90210.com).
This entry was posted in Science, Stephen Turner.