Harvard psychologist Jerome Kagan said: “…(ADHD) is an invention. Every child who’s not doing well in school is sent to see a pediatrician, and the pediatrician says: ‘It’s ADHD; here’s Ritalin.’ In fact, 90 percent of these 5.4 million (ADHD-diagnosed) kids don’t have an abnormal dopamine metabolism. The problem is, if a drug is available to doctors, they’ll make the corresponding diagnosis.”
In a 2019 review of economist Roger Koppl’s 2018 book Expert Failure, Stephen Turner writes:
* Education reform has been on the public agenda for more than a century. Educational research, as Ellen Condliffe Lagemann has shown, has been a succession of fads (Lagemann 2000). This gap never closed.
* Normal academic research, research not driven by a willing buyer with a policy agenda, is not exempt from these problems. As Richard Horton, editor-in-chief of The Lancet, writes, “[M]uch of the scientific literature, perhaps half, may simply be untrue. Afflicted by studies with small sample sizes, tiny effects, invalid exploratory analyses, and flagrant conflicts of interest, together with an obsession for pursuing fashionable trends of dubious importance, science has taken a turn towards darkness” (2015, p. 1380).
Horton adds a comment about markets: “Can bad scientific practices be fixed?” Not without changing the market: “Part of the problem is that no one is incentivised to be right. Instead, scientists are incentivised to be productive and innovative” (2015, p. 1380).
One facilitator of this turn to darkness has been the abuse of statistics, acknowledged by the American Statistical Association (Wasserstein and Lazar 2016) and publicized in recent discussions of p-hacking and in connection with the reproducibility crisis. The issues are very basic. P-values are conventionally used to certify a research finding as a fact. This convention, and its abuse, is a major source of the reproducibility crisis in psychology. A recent proposal (Benjamin et al. 2018) to tighten the significance threshold from 0.05 to 0.005 would cause whole fields to come close to disappearing, and this would certainly include the fields of evidence-based policy. And the p-value issue just scratches the surface of the problems, which extend to virtually every area in which statistics are used, and in which small manipulations of assumptions can produce radically different results.
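To make the mechanism concrete, here is a minimal simulation, a sketch of my own rather than anything in Turner or Koppl, assuming 1,000 pure-noise studies that each try ten exploratory comparisons and report only the best one:

```python
# Minimal p-hacking sketch (my illustration, not from the review): every
# effect below is truly zero, but each "study" runs ten exploratory
# subgroup tests and reports the most "significant" result.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_studies, n_looks, n = 1000, 10, 30  # counts are illustrative assumptions

hits_05 = hits_005 = 0
for _ in range(n_studies):
    pvals = [
        stats.ttest_ind(rng.normal(size=n), rng.normal(size=n)).pvalue
        for _ in range(n_looks)
    ]
    best = min(pvals)  # report only the best-looking comparison
    hits_05 += best < 0.05
    hits_005 += best < 0.005

print(f"null studies 'significant' at p<0.05:  {hits_05 / n_studies:.0%}")
print(f"null studies 'significant' at p<0.005: {hits_005 / n_studies:.0%}")
# With ten looks per study, roughly 1 - 0.95**10, about 40%, of pure-noise
# studies clear 0.05, versus about 5% at 0.005 -- one sense in which
# literatures built on exploratory 0.05 results would thin out under the
# stricter bar.
```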
One such problem is this: research subjects and goals are not randomly distributed. People are looking for, and attempting to establish, particular results. As John Ioannidis has pointed out, the effect of this is to make the expert consensus little more than a measure of bias (2005); a sketch of the arithmetic behind this point follows below. And this bias is often politically motivated. The existence of this kind of bias, which often operates by leaving topics intentionally under-researched, is admitted even by Brookings, whose reputation for impartiality is itself questionable:
“Psychologists, sociologists, and educational researchers have devoted far less attention to the black-white test score gap over the past quarter-century than they should have. Cowed by the hostile reaction to Daniel Patrick Moynihan’s 1965 report on the status of the black family and to Arthur Jensen’s 1969 article arguing that racial differences in test performance were likely to be partly innate, most social scientists have chosen safer topics and hoped the problem would go away.” (Jencks and Phillips 1998)
There are many other topics that are no-go zones. And there is even a philosophical literature defending the practice of avoiding research on topics that lead in the wrong political direction (Kitcher 2000, pp. 193-97). This kind of politically motivated self-censorship more or less guarantees that there will be massive error.
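As flagged above, the arithmetic behind Ioannidis’s point can be sketched in a few lines. The function and the values of R, alpha, beta, and u below are illustrative assumptions, not estimates from his paper:

```python
# Sketch of the arithmetic behind Ioannidis (2005). PPV is the probability
# that a "positive" published finding is true; R is the prior odds that a
# tested relationship is real; u is the bias term: the fraction of would-be
# negative analyses that get reported as positive anyway. The numbers below
# are illustrative assumptions, not estimates from the paper.
def ppv(R: float, alpha: float = 0.05, beta: float = 0.2, u: float = 0.0) -> float:
    true_pos = (1 - beta) * R + u * beta * R   # real effects found, or forced, positive
    false_pos = alpha + u * (1 - alpha)        # chance hits plus bias-driven hits
    return true_pos / (true_pos + false_pos)

for u in (0.0, 0.2, 0.5):
    print(f"bias u = {u:.1f}: PPV = {ppv(R=0.1, u=u):.2f}")
# Output: 0.62, 0.26, 0.15. As bias grows, the chance that a "finding" is
# true collapses toward the prior (here R/(1+R), about 0.09) -- the sense
# in which an expert consensus becomes little more than a measure of bias.
```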
* “Error” is a problematic notion in this context, because judgments about error also rest, so to speak, on turtles that go all the way down. There is no perch outside of opinion on which we can rest our judgments. It is, as Michael Oakeshott would say, platforms all the way down (1975, pp. 9, 27, 34). Our beliefs about the world rest on research that relies on experimental and statistical conventions. These in turn rest on other opinions, other consensuses. What we take to be true about the world depends on what someone decided to fund. The science and expertise we have are the product of “the world,” but of the world as disclosed by past decisions to disclose it, and to disclose it in a particular way. These “ways” are necessarily limited, in ways unknown to us. The path we took could have been different. And had we taken a different path, we might have been in a position to see the limitations of the path we took. If we did not invest in that path, we might never be in that position. It is pleasing to think that the truth will out, eventually. But turtles can live a long time. And science is as entangled in problematic decision processes as the state.