
Tevin Eugene Crosby, Juan Ramon Guerrero, and Javier Jorge-Reyes died alongside 46 other people at Orlando’s Pulse Nightclub in June. The American gunman who killed them logged onto Facebook during his shooting spree. “Taste the Islamic State vengeance,” he wrote in a status update.
In December, the parents and siblings of Crosby, Guerrero, and Jorge-Reyes filed a federal lawsuit against Facebook, Google, and Twitter, alleging that the companies have “knowingly and recklessly” provided ISIS members with platforms, and that they have profited from this business of giving terrorists a place to speak. To win, the families will have to somehow get around a crucial law, Section 230 of the Communications Decency Act, which protects web companies from liability for what third parties publish. Web companies often invoke Section 230 when they are sued over what their users do. When Backpage.com faced lawsuits from women who claimed it allowed users to conduct human trafficking, Section 230 distanced the classifieds site from its pimp users. When Facebook was sued for failing to remove an anti-Semitic group quickly enough, Section 230 protected the company. Section 230 is integral to the health of the internet; without it, web companies could not operate, because they’d be vulnerable to a gamut of lawsuits any time a user did anything illegal. But acknowledging the importance of the law is not the same as saying that internet companies never act recklessly or immorally. It’s saying it’s hard to punish them for it.
Since these companies are legally inoculated against responsibility for bad behavior by users, many have been reluctant to step into the fraught, politically tense role of deciding what is line-crossing, punishable, extremist speech and what is simply extreme. No matter how the Pulse lawsuit pans out, these social media companies will still face pressure to take responsibility for their roles in the spread of terrorist propaganda. Some have adopted effective bulwarks — for example, Twitter’s mass suspension of English-language ISIS-related accounts has led to a downswing in the organization’s presence on the platform. But the presence of terrorist propagandists and recruiters on these platforms is still a problem. As long as they host radical content that encourages and celebrates violence, there will be families like those of Crosby, Guerrero, and Jorge-Reyes, brokenhearted and prepared to fight.
Examples of the intersection of terrorism and social media abound. In 2016, a U.S. Department of Justice official claimed that most of the recruitment of young people by domestic terrorist groups happens through social media sites. The same summer as the Pulse killings, Facebook shut down a French terrorism suspect’s account — after he had already killed a police officer and the officer’s partner and livestreamed himself on Facebook Live from inside their home. By the time Facebook acted, the video had already been ripped and disseminated by sympathizers, journalists, and rubberneckers. A Wired story from April describes in grim terms the competent grip ISIS has on social media. “The group’s closest peers are not just other terrorist organizations, then, but also the Western brands, marketing firms, and publishing outfits — from PepsiCo to BuzzFeed — who ply the internet with memes and messages in the hopes of connecting with customers,” Brendan I. Koerner wrote in “Why ISIS Is Winning the Social Media War.”
“The younger the companies are, the more likely they are to let content stay up, for a variety of reasons,” Seamus Hughes, the deputy director of George Washington University’s Program on Extremism, told me. “One is that they just don’t have the staff to do the kind of policing they need to.”
Hughes says companies often tout a libertarian, say-what-you-will approach to moderation — until it becomes apparent that this model attracts enough nastiness to poison a platform’s reputation. This is why, for instance, Twitter has changed its attitude toward moderating its users. It initially advertised itself as a sanctuary for the anything-goes internet, but then, after prominent users quit because of harassment and its status as a “honeypot for assholes” scared buyers away, the company overhauled its policies to take a more proactive and punitive approach to policing language.
While Twitter’s crackdown pushed some ISIS communications channels onto the messaging app Telegram, it didn’t eliminate the Islamic State’s presence. A win in this climate is mere reduction. And the Islamic State is not the only known terrorist group with internet access. “White supremacists post to social media, and studies now posit that mass killings are contagious,” said John Carlin, then the head of the Justice Department’s National Security Division, during a speech at George Washington University in 2015. Carlin emphasized that domestic terrorist groups are also radicalizing recruits on social networks. Terrorist and extremist recruitment videos and propaganda-spewing accounts are still too prominent to satisfy activists tracking these groups.
“They’re not doing a good job of moderation,” Southern Poverty Law Center data analyst Keegan Hankes told me. Hankes has tracked how domestic extremists and hate groups use social media. “When it comes down to enforcement, [social media companies] have been pretty lackluster across the board.”
To address this failure, Facebook, Google, Twitter, and Microsoft pledged in December to create a tool for flagging terrorist content. It will use “hashing” technology, in which photos and videos deemed to be terrorist content are converted into unique digital fingerprints, or “hashes,” which are added to a centralized database. If matching images and videos crop up across networks, the hash will alert the companies, allowing them to review the content and remove it as they see fit. This tool won’t help when it comes to quelling new videos or images, but the goal is to reduce virality. The companies view this as a type of collaboration that preserves autonomy, since each partner will decide for itself what to ban and what to allow. If Facebook bans something, Twitter might allow it, and vice versa; neither is beholden to the other’s assessment. Deciding who is too extreme to speak is a slippery calculation, one that will inevitably result in outrage and cries of censorship; these companies do not want to cede this type of decision-making to a collective, not when suspending the wrong account or banning the wrong photo is a public relations debacle in waiting.
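The mechanics of such a system can be sketched in a few lines of code. What follows is a minimal illustration in Python, not any company’s actual implementation: it assumes a simple exact-digest lookup where production systems such as PhotoDNA use perceptual hashes that tolerate resizing and re-encoding, and the database, function names, and company labels are all hypothetical.

```python
import hashlib
from typing import Dict, Optional

# Hypothetical shared database mapping content fingerprints to the company
# that flagged them. In practice this would be a jointly maintained service,
# and the fingerprints would be perceptual hashes rather than exact digests.
shared_hash_db: Dict[str, str] = {}


def fingerprint(content: bytes) -> str:
    """Stand-in for a robust perceptual hash (here, a plain SHA-256 digest)."""
    return hashlib.sha256(content).hexdigest()


def flag_content(content: bytes, flagged_by: str) -> None:
    """A participating company adds the hash of content it has removed."""
    shared_hash_db[fingerprint(content)] = flagged_by


def check_upload(content: bytes) -> Optional[str]:
    """On upload, a company checks the shared database. A match only raises
    an alert; each company still decides for itself whether to act."""
    return shared_hash_db.get(fingerprint(content))


# One company flags a video; another gets an alert when the same file reappears.
flag_content(b"example-video-bytes", "CompanyA")
print(check_upload(b"example-video-bytes"))    # -> CompanyA
print(check_upload(b"brand-new-video-bytes"))  # -> None: new content is not caught
```

The sketch also shows the tool’s stated limitation: a brand-new video matches nothing in the database, which is why the companies frame the effort as reducing virality rather than catching first uploads.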
Hany Farid knows about hashing-based systems. In 2009, the computer science professor and digital forensics expert developed a similar program with Microsoft, PhotoDNA, which helps remove child pornography from the internet. Many large tech companies, including Facebook, Google, and Twitter, now use PhotoDNA, making it a linchpin in the effort to scrub child pornography from the web. Farid has long wanted to develop the same type of program for terrorist content, and he gave an interview to The Atlantic this summer explaining that he was waiting for companies to get on board. To that end, he partnered with a nonprofit called the Counter Extremism Project (CEP), where he is now a senior adviser. His proposal: Just as with his child-pornography detector, Farid would create a system for flagging terrorist photos. With PhotoDNA, the centralized database of banned-image hashes is held by the National Center for Missing and Exploited Children; with this new system, the Counter Extremism Project figured a new clearinghouse could be established for the same purpose. It gave this hypothetical clearinghouse a name (the National Office for Reporting Extremism) and some vague guidelines (only the “worst of the worst,” as CEP CEO Mark Wallace described it, like beheading videos, where ambiguity is minimal). Last summer, the CEP brought its idea for the hashing tool to major social media companies, assuming they’d be eager to take advantage of the technology. The companies balked.
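PhotoDNA’s exact algorithm is proprietary, but the general idea behind this kind of robust, perceptual hash can be shown with a toy “difference hash.” The sketch below is illustrative only, with made-up data, and is not PhotoDNA; the point is that, unlike the cryptographic digest in the earlier sketch, a perceptual fingerprint barely changes when an image is lightly altered, so re-encoded or slightly edited copies still match.

```python
import random
from typing import List


def dhash(pixels: List[List[int]]) -> int:
    """Toy difference hash over a 9x8 grayscale grid (values 0-255):
    each bit records whether a pixel is brighter than its right neighbor."""
    bits = 0
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left > right else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits; a small distance signals a near-duplicate."""
    return bin(a ^ b).count("1")


# Hypothetical image data: an 8x9 grid of grayscale values, plus a copy with
# mild noise added to simulate re-compression or light editing.
random.seed(1)
original = [[random.randrange(256) for _ in range(9)] for _ in range(8)]
tweaked = [[max(0, min(255, v + random.choice((-2, -1, 0, 1, 2)))) for v in row]
           for row in original]

# Near-duplicates differ by only a handful of bits out of 64; two unrelated
# images would differ by roughly 32 bits on average.
print(hamming_distance(dhash(original), dhash(tweaked)))
```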
“My view is this is playing out in a very similar way with the child pornography work,” Farid told me. “In 2003 to 2008, the tech companies did exactly nothing. They just wrung their hands, they talked about how serious a problem it was, they apologized, they said they wanted to do more but they couldn’t do anything.” Farid said that it took five years after Microsoft approached him to persuade other technology companies to begin using PhotoDNA. “Some of them were faster adopters than others. Facebook was relatively fast, Google was just recently starting to do it — so you’re talking about almost a 10-year window,” he said. “Look at how these tech companies move, and tell me that you believe the story that it takes 10 years to develop something like this. I think they’re dragging their feet, and I think they’re doing it intentionally. I think they don’t want to get involved with these complicated issues, law enforcement, national security, filtering content. I think that the time for that discussion has come and gone.”
A Facebook employee with knowledge of the situation, who did not want to be identified, told The Ringer that the company doesn’t expect to start sharing hashes with its partners until later this year, since the technical work is not yet complete.
So, to summarize: The CEP has the technology, and has offered it to these social media companies. But instead of taking this already-existing software, the platforms are choosing to wait and let a problem fester while they build their own versions.
“All we’re saying is, ‘Here’s a free piece of software that will do what you’re already doing, faster, cheaper, more efficiently, with less errors.’ So tell me, exactly what’s the problem? I’m puzzled by it,” Farid said.
Wallace shares Farid’s frustration. “I think that the social media companies don’t like anybody that challenges them, and I think that they’ve been, in a petty way, annoyed with us because we’ve challenged them on their removal stuff,” he said. “I think that’s a little bit unserious, but that’s the nature of some of the folks at the social media companies. I hope that they get past that.”

I reached out to all four companies to ask why they’d rebuffed the CEP’s offer. Facebook, Twitter, and Microsoft declined to comment, and Google never responded. However, an employee of one of the major tech companies with insider knowledge of the spat (who asked that neither they nor their company be identified, to avoid trouble with their employer) said the companies are wary of the CEP’s ties and tactics. “They have real doubts about Mark Wallace. They have questions about his funding, his lack of experience in tech,” the employee told The Ringer.
While the CEP bills itself as nonpartisan, it is staffed with a lineup of hawkish stars of the George W. Bush era. Its president, Frances Townsend, served in the Bush administration as chair of the Homeland Security Council (whose name and structure have changed over the years). Former senator and vice presidential candidate Joe Lieberman, an independent long known for his interventionist foreign policy stance, is on the advisory board. Wallace was a U.S. ambassador to the United Nations under Bush and held several other positions in his administration; Wallace also debate-prepped vice presidential candidate Sarah Palin in 2008. Wallace, Townsend, and Lieberman are also involved with another nonprofit, United Against Nuclear Iran, which campaigns against Iran’s nuclear program. In other words, the top brass at the CEP are all part of a political milieu to which the tech companies may not want to advertise strong ties. Although Silicon Valley isn’t nearly as liberal as it is often portrayed, there’s little for these companies to gain by yoking the issue of content moderation to a specific set of conservative politics.

Wallace would not tell me who the CEP’s donors are, and said that he did not believe they were controversial. When I suggested to him that some companies may be reluctant to work with the CEP because it does not divulge its donors, he pushed back: “It’s kind of hard to raise money to fight violent extremism if you make your donors public, because then they all could be threatened by extremists, right?”
“I just think that’s a red herring and a shiny-ball strategy to distract from their own inability to be fully forthcoming about an appropriate policy prescription where they know that they are making a weak argument,” he said. “We think that the right model is either CEP or another third-party validator. … You have to have a third party, and it doesn’t have to be us.”
Whether or not the companies are preoccupied with the CEP’s politics, the nonprofit’s critiques of tech companies made a collaborative atmosphere improbable, the tech company employee said.
“[Wallace] came in and thought he could bully tech companies and get them to do what he wanted, and he talked to them, and they decided not to work with him,” the employee said. “It’s an example of how to mishandle an organization or tech in Washington. You would think, on paper, they would’ve had an opportunity to partner with some folks and make some change, and they just really blew it.”
The employee claims that of all the tech companies involved, Wallace had the best relationship with Facebook, through his days in the Bush administration with Joel Kaplan, now Facebook’s vice president of global public policy. (Kaplan held several posts under Bush, the last being deputy chief of staff for policy.) But these ties were not enough. “They wanted Facebook to bless it and for the rest of the industry to fall in line behind Facebook. But Facebook did not do that,” the employee said.
Despite the obvious bad blood between the CEP and the tech companies, Wallace hasn’t given up hope that social media companies will eventually use the CEP’s hashing system. “Our position is that we’re still continuing to offer it to social media companies. Even if there are some personal feelings now, by all parties concerned, this is bigger than any personalities,” he told me.
One of Wallace’s major points of contention is that there’s nobody to keep the tech platforms’ terrorism database in check. He compared their strategy to allowing oil companies to self-regulate their environmental impacts. He makes a good point: Without a third party overseeing the effort, there’s no way to assess whether the companies are making the right calls. But the CEP’s project isn’t any more transparent about its own rules for flagging content.
“We’re intentionally not announcing the criterion for inclusion [in our database],” Wallace said. “I think people could take pot shots and I think we don’t want to do that.” When I said that I found that lack of transparency jarring, Wallace noted that he’d allow members of the press, including myself, to view the CEP’s database. “To counteract a very legitimate concern that you have, I think that [you] should have full access to our database to look at everything that’s included and if you object to it you should say so,” he told me.
Just like the social media companies it rails on, the CEP plans to keep its definition of terrorism to itself; the same exact transparency problem would exist if the companies decided to work with the CEP — and then there’d be the added concern that the nonprofit has such strong ties to neoconservative policymakers.

Different interpretations of what is and is not a terrorist group have already gotten social networks into conflicts. In April 2016, British activists accused Facebook of kowtowing to the Turkish government by removing posts critical of the government, as BuzzFeed News reported at the time. Facebook told BuzzFeed that its community standards do not allow references to certain political groups, including the Kurdistan Workers’ Party, which is designated a terrorist organization by NATO. However, activist Rosa Gilbert pointed out that one of the posts taken down referenced the YPG, a Kurdish militia group that isn’t considered a terrorist organization by the U.S. but is outlawed in Turkey. Gilbert and other pro-Kurdish activists argue that Facebook bans content offensive to Turkish rulers rather than content that runs afoul of its stated community guidelines.
Going further back, Google suspended the account of prominent Egyptian anti-torture activist Wael Abbas on YouTube in 2007 because Abbas posted violent footage — but rather than recruiting prospective terrorists, he said, it was meant to highlight human rights abuses. These companies have already shown that they are not skilled at evaluating context before removing posts. Just last week, activist Alexandra Brodsky tweeted screenshots of anti-Semitic insults polluting her mentions in an attempt to flag abusive accounts for Twitter — and the company suspended her account. Twitter reinstated Brodsky after the story was picked up by media outlets, but the company has clearly not worked out how to differentiate between hate speech and speech drawing attention to hate.
The definition of extremism is culturally bound. In some cases — the sort the CEP calls the “worst of the worst,” like beheading videos — there is a significant difference between an account sharing a beheading video as recruitment propaganda and an account sharing the same video as a piece of journalism. And while it is perfectly reasonable for tech companies to look for ways to get flagrantly horrifying, violent content from political extremists off their platforms, establishing a database like this demands care, finesse, and flexibility. In many cases, content deemed “extremism” by one faction of users is seen as activism by others.
And so this is what it’s come to for major tech companies and “terrorism flagging”: Platforms are proposing that they’ll do it themselves with no oversight from an objective third party, and they don’t even have the technology ready. But the alternative is not more appetizing; while the CEP has the tech ready to go, its route would still require the tech companies’ users to trust that the CEP’s secret criteria for determining what is and isn’t terrorist content are fair.
“One of our big calls to companies participating in this [creating their own systems for monitoring terrorism] has been transparency,” said Emma Llansó, Free Expression Project director for the Center for Democracy & Technology, a nonprofit that promotes internet freedom. “It might take the form of an independent ombudsman or some other kind of trusted independent entity to review the materials that end up in this database, so that there might be an independent evaluation of this project. Really sticking to the narrow definition of extreme and egregious terrorist propaganda — if we’re going to grant that that’s a narrow definition — is some way to establish that this hasn’t turned into a giant bucket of problematic content from across a bunch of different sites.”
The Southern Poverty Law Center’s Hankes takes a similar stance. “Transparency is really the key thing,” he said.
Llansó is concerned that automated takedowns would mean that tech companies won’t be able to evaluate the context of some videos and images. “If they’re going to go forward with this project, then that kind of case-by-case evaluation is the only way it might avoid some of those very obvious blanket-censorship issues,” she said. “But I think there’s a lot of pressure on them to do more takedowns faster for any number of kinds of content, so the recourse to automated systems is something we’ll see even more call for in the future.”
In December, Turkey’s interior ministry announced an investigation of 10,000 people for suspected terrorism support on social media. “They are accused of insulting government officials online, or what the ministry called ‘terror-related activity’ on the internet,” the BBC reported. This investigation illuminates the most potent danger of a centralized “terrorist content” database. While “terrorist content” might mean a beheading video to one administration, it has meant tweeting rude comments to at least one other. Federal law enforcement agencies may be more aggressive under one administration than under another, and they will seek access to the database. This is why defining the term precisely is a crucial shield against abuse: vague rules allow others to set the boundaries and supply the meaning.
None of the major tech companies have been forthcoming regarding the criteria they plan to use to define “terrorism.” I have been asking Facebook to explain how it defines terrorism for years, and it has never answered the question. Google didn’t respond, and Twitter has also failed to offer its criteria. Microsoft declined to comment on its criteria for the forthcoming centralized terrorism database, but it did point The Ringer to a May 2016 blog post in which it defined “terrorism” as “material posted by or in support of organizations included on the Consolidated United Nations Security Council Sanctions List that depicts graphic violence, encourages violent action, endorses a terrorist organization or its acts, or encourages people to join such groups.”
The only social network that responded to my request with a detailed answer about how it defines terrorist content was Gab, a fledgling network that distinguishes itself with a laissez-faire approach to content moderation. Anyone can sign up for Gab, but its most high-profile users are various far-right figures, including white supremacist Richard Spencer, who was briefly kicked off Twitter. Gab identifies terrorism using the U.S. Department of State’s definition and list of foreign terrorist organizations, as well as the FBI’s list of terrorists, terrorist groups, and terrorist activities. While it is not participating in a “hashing”-style database program like the one being developed by the big four, Gab’s chief communications officer, Utsav Sanduja, said it has moderators monitoring posts for illegal activity, and that the company is preparing an automated flagging system. “We are in the process of inventing certain artificial intelligence bots that would make sure certain things that are against the law are detected,” he said.
I find some of the far-right-wing rhetoric from Gab’s users to be ghastly reading, but the company’s forthright approach to moderation is a refreshing antidote to the muddled, opaque methods employed by larger companies. It is critical that these companies approach their terrorism moderation with transparency as a primary goal. While they are all privately run corporations that can suspend or ban users as they see fit, the push to establish a centralized database to make it easier to flag accounts should be accompanied by a push for clarity about what is and is not allowed on these platforms.

Moderators are inherently ideological, and terrorism moderators doubly so. “The idea of ‘moderating’ presupposes an outside vision of what is and isn’t acceptable in a conversation,” Adrian Chen wrote for The New York Times Magazine in 2015. “But when moderators set their own rules, with no incentive to conform to anyone else’s standard, they can look a lot less like custodians and a lot more like petty tyrants.” Petty tyranny is a look these major networks should be desperate to avoid. Yet none seem particularly apprehensive about the backlash that could ensue from taking down content without defining what makes it offensive. I worry that these companies lack the imagination to come up with a plan to evaluate language in which provocations to violence can be sifted out without curtailing speech that is simply provocative.
Even more concerning, if the social media companies do not improve their moderation efforts, they may be boxed into flawed moderation rules by legislation. In 2015, U.S. senators Dianne Feinstein (D-California) and Richard Burr (R-North Carolina) introduced the Requiring Reporting of Online Terrorist Activity Act. Its intention is to strong-arm the tech companies into prioritizing the monitoring of terrorist activity, but its language is so vague that it could be used to conscript the platforms into acting as government watchdogs for almost any type of speech that the government finds objectionable. Senator Ron Wyden (D-Oregon), who opposes the bill, said in a statement that it could “create a perverse incentive for companies to avoid looking for terrorist content on their own networks, because if they saw something and failed to report it they would be breaking the law, but if they stuck their heads in the sand and avoided looking for terrorist content they would be absolved of responsibility.”
The only way to approach moderation is to do so honestly, admit that it is suppressive and censorious, and be wholly self-aware and open about what is hidden and why it has been banished. Unfortunately, transparency and self-awareness are not hallmarks of Silicon Valley, and it appears the recent major pushes to purge platforms of terrorist content have resulted in little more than infighting and flawed thinking. Whether there is a way out remains to be seen.
An earlier version of this piece misidentified the name of the program Seamus Hughes is affiliated with; it is the Program on Extremism. It also misidentified former Senator Joe Lieberman’s party affiliation; he is an independent.
An earlier version of this piece incorrectly stated that Microsoft declined to offer its definition of terrorist content. The piece has been updated to reflect that the company provided a blog post containing its definition.