Diamond Reynolds broadcasts herself using Facebook Live from the passenger seat of a car in Falcon Heights, Minnesota, shortly after an officer shot Philando Castile. “Stay with me,” she says to Castile, who is covered in blood. He is dying. She explains what happened, narrating to her audience: “We got pulled over for a busted tail light in the back and the police — he’s covered — they killed my boyfriend. He’s licensed, he’s carried, so he’s licensed to carry. He was trying to get out his ID and his wallet out his pocket and he let the officer know that he was, that he had a firearm, and he was reaching for his wallet and the officer just shot him in his arm.”
You may have seen this video. It has been viewed more than 5.6 million times on Facebook; it has been discussed and replayed on television. You have probably, at least, heard about it, likely in the context of the grotesque triptych of violence in the United States last week — Castile’s killing by police, Alton Sterling’s killing by police, the killings of police guarding a peaceful protest in Dallas.
This is not necessarily the use case that Facebook imagined for Facebook Live. It’s certainly not the use case that Facebook promoted. When Facebook offered live video to all of its users this year, it was framed as a fun diversion. “Tips for using Facebook Live” includes this example: “Ricky Gervais in the bath.” Most of its examples are light entertainment, and the ones that could be seen as “political” are broadcasts from politicians already in power. Just as Twitter was meant to be a way to tell your friends what you ate for breakfast and turned into more (and less) than that because its users were more creative than its founders, Facebook Live was meant to be another cheery venue for recipe videos. Facebook sees Live — and really, its entire platform — differently than its users do. Live was not promoted as a digital flare gun to draw attention to atrocity; it became one only because of user ingenuity.
Reynolds used Facebook Live as a documentary tool. She saw no other recourse. She could not call 911 or flee when the police were the perpetrators and quick to shoot. Her broadcast was an act of journalism. It was an act of sousveillance — she was monitoring authority. (It was also a canny and brave survival tactic, although that is difficult to write without feeling like a piece of shit — Reynolds deserves something better than valorization. She deserves for this never to have happened.)
“I wanted it to go viral,” Reynolds said in a follow-up broadcast.

Facebook Live’s first “viral hit” was nothing like Reynolds’s video. It was BuzzFeed’s merry blogger retinue exploding a watermelon. The all-time Facebook Live virality juggernaut is still “Chewbacca Mom.” Candace Payne, a stay-at-home Texas mom, gained viral fame after broadcasting herself playing around with a Chewbacca mask. Her video was gleeful, spontaneous, and almost impossibly inoffensive and apolitical. In addition to receiving scholarship money and many breathless write-ups, Payne caught the attention of Mark Zuckerberg, who invited her to Facebook headquarters to celebrate.
Reynolds’s video also went viral, but it briefly disappeared, prompting concerns that Facebook had taken it down, whether deliberately, through a careless moderator, or through a triggered algorithm. Facebook insists that the video went offline because of a “technical glitch” on its end, not because of user behavior. When the video reappeared, it carried a warning about graphic content. Facebook has an experimental monitoring program in place for Live, in which the company deploys human content monitors once videos reach a certain threshold of popularity, but this test-run moderation wasn’t triggered for the Reynolds video.
Facebook has admitted to mistakenly pulling down videos and other content in the past because of bad flagging, but when I asked the company for something less vague (and fake-sounding) than its “technical glitch” excuse this time around, it pointed me toward a recent blog post about Facebook Live moderation. “The rules for live video are the same for all the rest of our content. A reviewer can interrupt a live stream if there is a violation of our Community Standards. Anyone can report content to us if they think it goes against our standards, and it only takes one report for something to be reviewed,” it reads.
Facebook also laid out its policies regarding violence in content posted and broadcast on its platform. Basically, treatment will be decided on a case-by-case basis. “One of the most sensitive situations involves people sharing violent or graphic images of events taking place in the real world. In those situations, context and degree are everything. For instance, if a person witnessed a shooting, and used Facebook Live to raise awareness or find the shooter, we would allow it. However, if someone shared the same video to mock the victim or celebrate the shooting, we would remove the video.”
I have so many questions about these statements. Before releasing Facebook Live, did Facebook employees discuss how the company would respond to violent broadcasts, and is this new case-by-case policy a significant change from the original plan? The company won’t say what steps it took to prepare for violent footage, but it is difficult to imagine that such footage was entirely unanticipated. Who exactly decides whether a video is raising awareness or celebrating violence, and are the two always mutually exclusive? These posts hat-tip transparency without clearing up much of anything.
Facebook’s community standards are so vague that they are meaningless. For example, they ban “terrorism” without offering a definition. The company is already facing criticism over how it applies those standards when deciding which posts to pull; British activists have accused it of deleting photos of protest signs and graffiti criticizing Turkey over its treatment of Kurds, with Facebook citing its terrorism ban as the reason the content came down. The amorphous terrorism ban gives the company enough leeway to decide unilaterally what is and isn’t terrorism or terrorism-adjacent content, and its guidelines on violence are just as elastic, offering just as little practical value.
Last year, a gunman killed two journalists and posted video of the slayings to his Facebook and Twitter accounts. Facebook took down the graphic video by deleting the killer’s account, but by then the video had already been copied and disseminated online. In that case, the video clearly celebrated the killings, since it came from the perpetrator. But what if it had also served as evidence, useful both for tracking him down and for convicting him?
As it issued its vague answers about how it plans to respond to violent broadcasts, Facebook put up an enormous “Black Lives Matter” sign at its headquarters.
“Saddened at events of last few days but humbled to have built fb live which is helping people shed more light,” Vadim Lavrusik, a former Facebook employee who worked on Live, tweeted (and then deleted) on Thursday.
“The images we’ve seen this week are graphic and heartbreaking, and they shine a light on the fear that millions of members of our community live with every day. While I hope we never have to see another video like Diamond’s, it reminds us why coming together to build a more open and connected world is so important — and how far we still have to go,” Zuckerberg wrote in a Facebook post last week.
Clearly, Facebook is prepared to issue such statements, but it doesn’t appear as prepared for activist journalism on Facebook Live. And now it’s reckoning with that lack of foresight, however clumsily.
Facebook has anointed itself the gatekeeper of the news. It has been so successful at rerouting browsing habits through its News Feed that it can upend media organizations simply by rejiggering its algorithms. Gatekeeping was already an ambitious plan when it meant channeling preexisting media outlets through Facebook. With Live, Facebook can control the conversation even more directly.

What we see is important, and political.
In May, FBI director James Comey claimed that a “viral video effect” was causing increased violence because police officers, afraid that their law-enforcement activities would go viral, were scared to do their jobs. Comey’s luxuriously stupid theory has one nugget of truth: these videos are going viral, and they are indeed exposing bad deeds by police. Video has played a critical role in exposing many recent extrajudicial killings of black people by police officers. Video offers unambiguous proof that Eric Garner died at the hands of police in 2014, and that Alton Sterling was killed by police last week. The footage of a Chicago police officer shooting Laquan McDonald to death was suppressed for 400 days; when it finally surfaced, it exposed the lie in the narrative that authorities wanted to sell.
I don’t mean to argue that video is immune to distortion or free from misinterpretation. A camera cannot re-create what happened. It can show only one vantage, and that vantage is not an infallible piece of some cosmic truth. Research on police body cameras underlines how the footage prioritizes an officer’s point of view. But often that one vantage is critical for exculpating the innocent and exposing the guilty. When it’s a police officer’s word against a dead person’s silence, or a police officer’s word against a witness’s, video may be the only evidence that shows us the real aggressors.
In Margaret Atwood’s novel Oryx and Crake, the young protagonist and the antagonist casually watch snuff films and kiddie porn while they are still children themselves. The crudeness of the dystopian world they’ve grown up in has leached the horror from their emotional responses. When Atwood wrote the book, shock websites like Rotten.com were already thriving portals to nearly unlimited gore, available to any kid with a modem and relaxed parental supervision. Her fiction wasn’t so much speculative as reflective.
Facebook isn’t going to devolve completely into LiveLeak.com, but Facebook Live has made the social network a destination website for violent videos. You can watch Antonio Perkins die; you can watch Brian Fields get shot five times. Both Chicago-area men were broadcasting themselves when their assailants appeared. As the service continues to grow, especially with so much attention paid to Reynolds’s video, it is likely that the number of graphically violent broadcasts will increase. Perhaps people will move on from Facebook Live and choose another broadcasting platform, and the company will be able to resume its plan to sucker media companies into gimmicky freebies. But if Facebook Live keeps growing, the tension between what Facebook wants Live to be and what it is turning into will also grow.
Witness, a grassroots organization that trains people to film human-rights abuses, does not view Facebook Live as a cure-all for documenting violence. “We view tech as a tool, not a solution. So platforms like Facebook Live and Periscope can certainly be good tools for people to expose police violence, but as with all new tech tools, it takes time for the challenges and unintended consequences to be addressed, including how tech companies handle violent content being streamed on their sites. Video takedowns have plagued social media since its inception, and because of this, there have been hours of valuable evidentiary footage lost from conflicts around the world,” Witness senior engagement coordinator Jackie Zammuto said via email. “Generally, we advise people not to rely solely on platforms for preserving their media and instead provide guidance on how to safely archive valuable footage for use in advocacy or evidentiary settings.”
Desmond Cole, a journalist and activist in Toronto, recently told CTV News that he sees videos of police killing black men as potentially harmful. “People sometimes say that these videos are a good thing, and I’m not so sure, because I feel like the public just may be getting desensitized,” he said. The concern over desensitization is valid. A constant barrage of violent images can breed empathy fatigue, inuring us to what should appall us. People may share videos of other humans suffering and dying for awful reasons: for a morbid distraction, for the sickening thrill of a snuff-film stomach thud, or to perform wokeness or exquisite sensitivity without actually caring.
Then again, audience callousness does not invalidate the importance of documenting state violence. And being tired of seeing something isn’t the same as wishing it didn’t exist.

Facebook is now a landing pad for footage that can shape public narratives, footage that can lock people up and set people free. If someone is broadcasting content that Facebook deems inappropriate, the company can shut down the live stream before it ends. That means Facebook can stop someone from documenting something on its platform in real time, making a judgment call, as the violence unfolds, about whether it is good or bad to document.
What Facebook isn’t mentioning as it lays out its moderation policy is that the same video posted by one user to raise awareness can be reposted by another in twisted celebration, and that a video depicting an awful crime, even a crime gleefully committed, may still have enormous value as a historical document. It’s good that the company is attempting nuanced moderation, but we should be wary of how it will play out.
Can we trust Facebook to appraise which brutalities are beneficial to broadcast and which should be hidden? This is not to say that Facebook will pull vital citizen journalism. It is to say that Facebook’s content moderators decide who gets to speak.