
The Problems With Internet Platforms Policing Hate

Tech companies are making headlines by cutting ties with white supremacists and Nazis. As gestures of human decency, these initiatives are heartening. But will they actually hobble hate groups?

“Literally, I woke up in a bad mood and decided someone shouldn’t be allowed on the internet,” Cloudflare CEO Matthew Prince wrote in an email to staffers last week. He had impulsively booted neo-Nazi site The Daily Stormer from Cloudflare’s service because the people behind the site were, in his words, “assholes.” The violence at the “Unite the Right” rally in Charlottesville provoked a moment of reckoning for Silicon Valley. Technology companies had provided hate groups with online spaces and organizational tools, and what happened in Charlottesville was the bone-chilling result of some of those groups taking advantage of those resources.

Cloudflare was not alone in its decision to limit the presence of hate groups on its services in the aftermath of white supremacist violence. Voice and chat app Discord, described as “the alt-right’s favorite chat app” by The New York Times, shuttered its popular altright.com server, a hotbed for racist chatter. Web hosting service Squarespace dropped high-profile “Unite the Right” speaker Richard Spencer. Facebook banned some white supremacist and racist groups, including Right Wing Death Squad, Genuine Donald Trump, and Vanguard America. Quartz is keeping a running tally of technology companies cutting ties with white supremacist groups and individuals, including Airbnb, Google, Uber, GoFundMe, PayPal, Apple, and Spotify. The list keeps growing.

Tech companies are not beholden to their users in the same way the state is beholden to its citizens. This is an obvious statement, but it bears repeating, because the First Amendment and free speech norms are often invoked when discussing online content moderation. “Terms of service are not constitutions. And for the most part, platforms don’t grant users positive rights,” legal scholar Kendra Albert said during a talk at the Berkman Klein Center for Internet & Society last year. The corporation is in the pilot’s seat, and it can hit the eject button whenever it wants.

For a long time, Silicon Valley companies chose not to hit eject. In 2012, for example, Twitter’s Tony Wang called the company “the free speech wing of the free speech party,” which didn’t really make sense but was seen as a boast about Twitter’s agnostic stance on content. As Twitter gained a reputation as a “honeypot for assholes” and weathered criticism for allowing both widespread harassment and ISIS recruitment, its stance on moderating content evolved. Reddit was also famously hands-off in moderating its content, which meant it became an attractive digital space for communities sharing racist, xenophobic, pedophilic, and otherwise repulsive content. Eventually, Reddit banned some of its more offensive groups, including r/altright.

“We’ve been very encouraged by all these companies taking public stances against hate. We believe that these companies have a responsibility for the public interest and for the safety of their users. We’ve been encouraging many of these companies, like GoDaddy, to do this for some time,” Brittan Heller, the Anti-Defamation League’s director of technology and society, told me. “When these companies have guidelines that prohibit content, like neo-Nazi and white supremacist content, and then they don’t take action to enforce their terms of service, it suggests that they are tacitly condoning it.”

Last week’s moderation blitz seemed like Silicon Valley was finally taking the threat of right-wing domestic extremist groups as seriously as it has taken the online presence of ISIS—a final step away from the former live-and-let-blog ethos, and toward a more serious-minded attempt at eradicating homegrown hate groups. These gestures are morally satisfying, and they will help scatter white supremacist communities online. “It’s definitely going to have a long-term impact on them,” Southern Poverty Law Center analyst Keegan Hankes told me, estimating that it might take months for some of these groups to recover. “They had a pretty sophisticated ecosystem of right-wing content figured out and, basically overnight, it disappeared.”

This does not mean that these groups have no recourse. In the late 1980s, Belgium had a racist politician problem. A far-right party called Vlaams Blok promoted an anti-immigration platform that so repulsed its opponents that they banded together and adopted a policy known as a cordon sanitaire—they effectively quarantined Vlaams Blok by forming a coalition that pledged to exclude the party from the political system. It did not work. Despite those efforts, Vlaams Blok continued to gain popularity, because it was seen as “fighting against the establishment.” In Silicon Valley’s push to purge white supremacists, an improvised digital cordon sanitaire is taking shape, albeit an incomplete one, and as long as these hate groups can find other corridors to spread their messages, denials of service will act only as roadblocks rather than as the end of the road.

The case of The Daily Stormer, though, illustrates how intensified rule enforcement from technology companies can hinder the dissemination of hate speech. The website was evicted from its www.thedailystormer.com address, first by GoDaddy, then by Google, and then by a quick succession of lesser-known registrars, including 101Domain, Namecheap, and Russian and Chinese providers. YouTube and Facebook banned its accounts. The site is currently available only at a .onion address, which means it can be reached only over the Tor network. While this means it still has a web presence, its obscure location is likely to severely circumscribe its efforts to recruit young, impressionable people to its cause.

“There’s a reason these guys get so incensed when they get kicked off platforms like Twitter and Facebook,” Hankes noted. “They know these are major portals to talk to people who aren’t already involved in the movement or might be susceptible to joining the movement.”

Making it harder for people to access neo-Nazi propaganda on the internet seems as uncontroversial as attempting to eliminate child pornography. And, from a moral standpoint, it is. These are little victories, and they are worth celebrating. But they also require close scrutiny. Piecemeal crackdowns do not amount to a solution, and it is necessary to consider how difficult and dangerous it is to turn corporations into ad hoc morality police.

Prior to last week, Cloudflare had a long history of unapologetically allowing hate groups to use its services. As a ProPublica report from May 2017 detailed, Cloudflare provided its protective services to The Daily Stormer. The company defended its decision by citing its commitment to remaining neutral on the issue of clients’ content. That all changed after Charlottesville. Cloudflare’s Matthew Prince unilaterally decided to refuse service to the site, citing a clause in the company’s terms of service that allows it to refuse service at its sole discretion—a clause that many major platforms include. However, after calling the people behind The Daily Stormer “assholes,” Prince admitted something crucial in a separate blog post explaining his decision. “In a not-so-distant future, if we’re not there already, it may be that if you’re going to put content on the Internet you’ll need to use a company with a giant network like Cloudflare, Google, Microsoft, Facebook, Amazon, or Alibaba,” he wrote. “Without a clear framework as a guide for content regulation, a small number of companies will largely determine what can and cannot be online.”

The Electronic Frontier Foundation released a statement that went even further than Prince’s, outlining the danger of allowing private corporations to act as arbiters of speech. “Any tactic used now to silence neo-Nazis will soon be used against others, including people whose opinions we agree with. Those on the left face calls to characterize the Black Lives Matter movement as a hate group. In the civil rights era cases that formed the basis of today’s protections of freedom of speech, the NAACP’s voice was the one attacked,” the EFF wrote, emphasizing that internet providers should have a transparent process in place for kicking Nazis and other hate groups off their platforms.

Richard Kirkendall, the CEO of Namecheap, explained his decision to refuse service to The Daily Stormer by noting that he made a judgment call that its contents amounted to speech that incites violence. “There is a line where free speech ends and incitement begins,” he wrote. “It may be an elusive one but, as United States Supreme Court Justice Potter Stewart stated in his threshold test for obscenity in Jacobellis v. Ohio: ‘I know it when I see it.’”

Kirkendall’s argument that The Daily Stormer contained speech that incited violence is a strong one. But he still came to that conclusion behind closed doors, using a personal moral criterion unavailable to the public. “I know it when I see it” moderation sets a troublesome precedent. Asking Silicon Valley to act as the gatekeeper for public discourse is asking more than tech companies have historically proved themselves capable of. They have frequently made poor moderation choices in the past. A recent Reveal report detailed how activists of color have had their posts blocked by Facebook for attempting to raise awareness of instances of hate speech. Facebook also received blowback after it censored “Napalm Girl,” a historic image from the Vietnam War.

As I detailed in a previous piece on content moderation, earlier this year Twitter suspended the account of activist Alexandra Brodsky after she tweeted screenshots of anti-Semitic insults as a way to flag them as abusive. After the story made the media rounds, Twitter reinstated Brodsky, but the episode didn’t seem to help the company figure out how to differentiate between hate speech and speech drawing attention to hate. That distinction remains elusive, and even if companies develop more nuanced methods for ferreting out hate speech, they will also need to be clear about how they apply those methods.

“We’ve been advocating not only for transparent terms of service but for transparent mechanisms of enforcement, and unless you have both of those together,” Heller said, “it may not be fully effective.”

“We’ve always recommended at the Center for Democracy and Technology that companies have very clear terms of service before they’re in a crisis moment, because it’s hard to articulate principles and a nuanced position in the heat of the moment,” Emma Llansó, the director for the Center for Democracy and Technology’s Free Expression project, told me. “But we also need to see a lot more transparency and accountability from companies about how they enforce these policies across the board.”

“There’s so much content moderation that happens and we have no view into it as users, and as the general public, so when we’re having public policy discussions around the power these companies have, we’re really operating somewhat in the dark,” Llansó said. “They are going to do things that are really destructive of free expression as well, and they need to have appeals processes in place to actively consider how their decision-making is affecting their users, and public discourse. That’s somewhere we see a lot of room for improvement in a lot of companies.”

The world will be a better place if technology companies are able to disrupt the spread of propaganda. But while their post-Charlottesville efforts are an encouraging sign that technology companies are finally treating domestic right-wing extremist groups as a serious threat, the way these companies have chosen to address that threat is an unsettling reminder that they are near-unfettered gatekeepers of speech. We are online at the whim, and for the profit, of a few extremely wealthy multinational corporations with faulty track records for moderating content. As overdue and appreciated as their efforts to root out hate groups from the digital world are, their work to preserve an open internet should be undertaken with equal urgency.
