Nathan Cofnas writes for Quillette:
Last year Robert Bowers shot up a synagogue in Pittsburgh, killing eleven people. Before committing this atrocity, he wrote on Gab: “HIAS [Hebrew Immigrant Aid Society] likes to bring invaders in that kill our people. I can’t sit by and watch my people get slaughtered. Screw your optics, I’m going in.”
Gab is a Twitter alternative used by many neo-Nazis and alt-righters who have been (or know they would be) banned from actual Twitter. The unintended—but entirely predictable—consequence of throwing extremists off Twitter has been to create a large community of exiles on Gab. In Gabland, it is people who question Jewish conspiracy theories or the idea that the US should be a white ethnostate who are considered “trolls.” A similar community is developing on the YouTube alternative BitChute, whose Alexa ranking is rising quickly.
Bowers’s threat of imminent violence (“Screw your optics, I’m going in”) didn’t alarm any of his fellow extremists on Gab. What if he had written the same thing on Twitter? Someone would have been much more likely to contact the police. Perhaps at that point there wouldn’t have been enough time to stop him anyway. But if he had been on Twitter, it’s possible that someone would have reported his ominous past statements to the police long before the shooting. In any case, relegating Bowers to a non-mainstream platform didn’t stop him from committing the deadliest attack on Jews in US history.
In the last few weeks, the leading social media companies have doubled down on their strategy of deplatforming people and censoring content. Alt-right accounts are disappearing from Twitter, videos on controversial topics are being deleted from YouTube, and even some politically moderate YouTube streamers/content creators who didn’t violate the terms of service are being demonetized in an effort to drive them away. But deplatforming won’t work.
This claim needs clarification. Whether something “works” or not depends on what you’re trying to accomplish. If Twitter/YouTube/Facebook want to virtue signal by showing that they oppose controversial views (which could well be their true aim), then deplatforming controversial people will work. What I mean is that it won’t accomplish the noble goals that these companies say are motivating them: to prevent violence and the spread of socially destructive misinformation. If these are their goals, then deplatforming will backfire—and already has backfired.
Advocates of deplatforming tend to think only one step ahead: Throw people with opinions you don’t like off mainstream social media and you won’t see them again—out of sight, out of mind. But the deplatformers should try thinking two, maybe even three, steps ahead: What will people do after they’re banned? How will their followers react? How will this be perceived by more or less neutral observers? With some forethought, it’s easy to see that banning people with supposedly “bad” or “wrong” views may not be the victory that deplatformers think it is.
Banning people from social media doesn’t make them change their minds. In fact, it makes them less likely to change their minds. It makes them more alienated from mainstream society, and, as noted, it drives them to create alternative communities where the views that got them banned are only reinforced.
Banning people for expressing controversial ideas also denies them the opportunity to be challenged. People with extremist or non-mainstream opinions are often written off as deranged monsters who could not possibly respond to rational argument. There are, of course, some neo-Nazis, Holocaust deniers, and the like who conform to this cartoonish stereotype. With these people, reason and evidence go in one ear and come out the other. But not everyone outside the mainstream, and not everyone who falls for a misguided conspiracy theory, deserves to be written off. People do sometimes change their minds in response to reason. If they didn’t, there would be no point in debating anything.