Andrew Gelman writes:
A bunch of people pointed me to a New York Times article by Susan Dominus about Amy Cuddy, the psychology researcher and TED-talk star famous for the following claim (made in a paper written with Dana Carney and Andy Yap and published in 2010):
That a person can, by assuming two simple 1-min poses, embody power and instantly become more powerful has real-world, actionable implications.
Awkwardly enough, no support for that particular high-stakes claim was ever presented in the journal article where it appeared. And, even more awkwardly, the key specific claims for which the paper did offer some empirical evidence failed to show up in a series of external replication studies, first by Ranehill et al. in 2015 and then more recently by various other research teams (see, for example, here). Following up on the Ranehill et al. paper was an analysis by Joe Simmons and Uri Simonsohn explaining how Carney, Cuddy, and Yap could’ve gotten it wrong in the first place. Also awkward was a full retraction by first author Dana Carney, who detailed many ways in which the data were handled in order to pull out apparently statistically significant findings.
Anyway, that’s all background. I think Dominus’s article is fair, given the inevitable space limitations. I wouldn’t’ve chosen to write an article about Amy Cuddy—I think Eva Ranehill or Uri Simonsohn would be much more interesting subjects. But, conditional on the article being written largely from Cuddy’s perspective, I think it portrays the rest of us in a reasonable way. As I said to Dominus when she interviewed me, I don’t have any personal animosity toward Cuddy. I just think it’s too bad that the Carney/Cuddy/Yap paper got all that publicity and that Cuddy got herself tangled up in defending it. It’s admirable that Carney just walked away from it all. And it was probably a good call by Yap to have pretty much avoided any further involvement in the matter.
The only thing that really bugged me about the NYT article is when Cuddy is quoted as saying, “Why not help social psychologists instead of attacking them on your blog?” and there is no quoted response from me. I remember this came up when Dominus interviewed me for the story, and I responded right away that I have helped social psychologists! A lot. I’ve given many talks during the past few years to psychology departments and at professional meetings, and I’ve published several papers in psychology and related fields on how to do better applied research, for example here, here, here, here, here, here, here, and here. I even wrote an article, with Hilda Geurts, for The Clinical Neuropsychologist! So, yeah, I do spend some time helping social psychologists.
Dominus also writes, “Gelman considers himself someone who is doing others the favor of pointing out their errors, a service for which he would be grateful, he says.” This too is accurate, and let me also emphasize that this is a service for which I not only would be grateful. I actually am grateful when people point out my errors. It’s happened several times; see for example here. When we do science, we can make mistakes. That’s fine. What’s important is to learn from our mistakes.
In summary, I think Dominus’s article was fair, but I do wish she hadn’t let that particular false implication by Cuddy, the claim that I didn’t help social psychologists, go unchallenged. Then again, I also don’t like it that Cuddy baselessly attacked the work of Simmons and Simonsohn and to my knowledge never has apologized for that. (I’m thinking of Cuddy’s statement, quoted here, that Simmons and Simonsohn “are flat-out wrong. Their analyses are riddled with mistakes . . .” I never saw Cuddy present any evidence for these claims.)
* Steve Sailer: Why has social psychology been the central front in the Replication Crisis?
I think this is partly because social psychology, as social psychologist Jonathan Haidt has documented, is extremely politicized. On the other hand, it is also because social psychologists are scientific enough to care. Other fields are at least as distorted, but they don’t feel as bad about it as the psychologists do. (At the extreme, cultural anthropologists have turned against science in general: at Stanford, for example, the Anthropology Department broke up for a number of years into Cultural Anthropology and Anthropological Sciences.)
Is the social psychology glass therefore half empty or half full? I’d say it’s to the credit of social psychologists that they feel guilty enough to host these debates rather than to just ignore them.
* Andrew Gelman: – Psychology is a relatively open and uncompetitive field (compared for example to biology). Many researchers will share their data.
– Psychology is low budget (compared to biomedicine). So, again, not so much incentive to hoard data or lab procedures. There’s no “Robert Gallo” in psychology who would steal someone’s virus sample in order to get a Nobel Prize.
– The financial rewards are lower within psychology, hence the incentive is not to set up your own company using secret technology but rather to get your idea known far and wide so you can get speaking tours, book contracts, etc. Sure, most research psychologists don’t attempt this, but to the extent there are financial rewards, that’s where they are.
– In psychology, data are generally not proprietary (as in business) or protected (as in medicine). So there’s a norm of sharing. In bio, if you want someone’s data, you have to beg. In psychology, they have to give you a reason not to share.
– In psychology, experiments are easy to replicate (unlike econ or poli sci, where you can’t just run a bunch more recessions or elections) and cheap to replicate (unlike medicine which involves doctors and patients). So replication is a live option, indeed it gets people suggesting that preregistered replication be a requirement in some cases.
– Finally, hypotheses in psychology, especially social psychology, are often vague, and data are noisy. Indeed, there often seems to be a tradition of casual measurement, the idea perhaps being that it doesn’t matter exactly what you measure because if you get statistical significance, you’ve discovered something. This is different from econ where it seems there’s more of a tradition of large datasets, careful measurements, and theory-based hypotheses. Anyway, psychology studies often (not always, but often) feature weak theory + weak measurement, which is a recipe for unreplicable findings.
To put it another way, p-hacking is not the cause of the problem; p-hacking is a symptom. Researchers don’t want to p-hack; they’d prefer to confirm their original hypotheses. They p-hack only because they have to…
Regarding the issue of why I never contacted Cuddy directly: On the occasions that I have contacted people directly when there have been big problems with their work, I typically have not found such interactions to be useful. Sometimes people don’t respond, other times they seem to miss the point. I do agree that there’s the potential to learn from such a conversation—but there’s also the potential to learn by posting on the blog and getting comments from anyone in the world who might have interest or expertise in the problem.
What it comes down to, I think, is that there are different styles of interaction. Given that I’ve been blogging daily for over ten years, it’s no surprise that I find blogging to be a useful way of learning from and interacting with people. One reason I started blogging is that it seemed more useful to converse with thousands of people at once, rather than exchanging with people one on one. For some purposes, though, email can be better, and in retrospect maybe this would’ve been one such case. I’m not so sure that Cuddy thinks so, though, given that she never emailed me either.
Regarding “the idea of trying to persuade her, in person”: Given that she hadn’t been persuaded by the direct evidence of Ranehill et al., and she hadn’t been persuaded by the very clear arguments of Simmons and Simonsohn, I didn’t (and don’t) see any reason she’d be persuaded by me! After all, I wasn’t really offering any new arguments; my contribution, such as it was, in my blog posts and Slate article (coauthored with Kaiser Fung) was to report the Ranehill et al. and Simmons and Simonsohn articles and add some perspective. So I’m not really sure how the conversation would’ve gone, given that Cuddy had already seen those things and was unpersuaded.
Just in general I find it easier, and maybe more productive, to present my perspective, address arguments that come in, and consider how I can learn. Direct persuasion rarely works and is stressful, which I guess is what I mean when I said I don’t like interpersonal conflict. Anyway, each of us has our own style of interaction. Here I am responding to blog comments at 5 in the morning, something I don’t usually do!
* I was also bothered by an implicit claim in Dominus’s article that one has the right to demand one’s favorite channel of communication.
* I think that the NYT article made several insinuations that were unjustified, namely that the brutal criticism Cuddy faced is driving women out of the field, etc. The NYT writer somehow downplays how much Cuddy has benefited from a poorly done piece of science, and she does not point out that Cuddy’s protestations are very much in her own self-interest.
But what’s worse about the article is how it ignores how Cuddy’s actions affect other people in the social sciences. Someone like Cuddy beat out many people, including many women, to receive her tenured job at a top school. That position comes with responsibility, and Cuddy would apparently like to have all the benefits without all of the ensuing challenges. She eagerly took on a public role (no one forced her to do a TED talk) and then became dismayed when she also faced public criticism over it. And it’s all the more appalling considering how people like Cuddy have used shoddy research practices to advance their own careers at the expense of science: that makes it more difficult for those without inside connections to do research and have others pay attention.