
Facebook, like many American institutions, is going through a postelection identity crisis. The social network has been besieged with demands that it fix its “fake news” problem, which became politically charged during the election season, when false right-wing stories spread especially widely on the platform, according to a BuzzFeed analysis. Facebook employees have reportedly formed “renegade” groups to address the issue. And Barack Obama, the most famous victim of our fact-optional media environment, has said that if social media continues to present true and untrue information without distinction, “we won’t know what to protect.”
The alarm bells finally seem to be ringing for Mark Zuckerberg, who on Friday announced several steps that Facebook plans to take to combat fake news. The problem is a confounding one, not least because “fake news” is a generic term that can be used to describe satirical pieces, deliberate hoaxes, money-grubbing sensationalism, and more. Facebook’s solutions are wide-ranging, and in some cases run counter to what we know about the company’s short-term business interests and relentless data-driven approach to increasing user engagement. Here’s a look at the challenges facing each of Zuckerberg’s specific ideas.
This is the solution that is closest to Facebook’s wheelhouse. Blatantly false headlines, like “Pope Francis Shocks World, Endorses Donald Trump for President,” can be readily checked against credible news sources. Websites that regularly post information that has been debunked by fact-checkers, or that is repeatedly flagged by individual users, could see their content ranked lower in people’s News Feeds. Systems like this are already in place to detect email spam, another nefarious scourge of the web that is impossible to define in specific terms. A group of students at a Princeton hackathon even developed a Chrome extension that automatically fact-checks News Feed posts. The technology to identify blatantly false news exists; the real question is what Facebook will choose to do with the content it categorizes as “misinformation.”
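To make the spam-filter comparison concrete, here is a minimal sketch, not anything Facebook has described, of how a naive check might combine a known-hoax domain list, fuzzy matching against already-debunked headlines, and user flag counts. The domains, headlines, and threshold below are placeholders chosen for illustration.

```python
# Hypothetical sketch of a naive misinformation filter; none of this reflects
# Facebook's actual systems. Domains, headlines, and thresholds are made up.
from difflib import SequenceMatcher

KNOWN_HOAX_DOMAINS = {"endingthefed.com", "abcnews.com.co"}  # example sites widely reported as fake-news sources
DEBUNKED_HEADLINES = [
    "Pope Francis Shocks World, Endorses Donald Trump for President",
]

def looks_like_misinformation(domain: str, headline: str, user_flags: int,
                              flag_threshold: int = 50) -> bool:
    """Return True if a post should be demoted pending review."""
    if domain.lower() in KNOWN_HOAX_DOMAINS:
        return True
    # Fuzzy-match the headline against claims fact-checkers have already debunked.
    for debunked in DEBUNKED_HEADLINES:
        if SequenceMatcher(None, headline.lower(), debunked.lower()).ratio() > 0.85:
            return True
    # Fall back on crowd signals, much as spam filters weigh user reports.
    return user_flags >= flag_threshold

print(looks_like_misinformation("abcnews.com.co", "Pope endorses Trump", user_flags=3))  # True
```

The lookup itself is the easy part; as with spam, the hard decisions start once a post trips the filter.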
Though Sheryl Sandberg backed Hillary Clinton and Mark Zuckerberg verbally subtweeted Donald Trump earlier this year, Facebook remains acutely wary of accusations of partisanship. The company fired its Trending news team in August after a report that it was suppressing articles from conservative outlets, and multiple postelection reports have claimed the episode made Facebook indecisive about how to deal with fake news. Relying on third-party fact-checkers could lessen scrutiny on Facebook’s verification process and give the company another scapegoat when mistakes are inevitably made. Zuckerberg has already mentioned Snopes as a useful third-party source. Other fact-checking organizations have offered their services to Facebook.
This is where things get dicey, and where Facebook’s challenge diverges from the role of spam blockers. While spam is a nuisance every user is all too happy to be rid of, a person who has chosen to post a piece of “fake news” has likely already incorporated it into their worldview. “When you see a post that says ‘Clintons suspected in murder-suicide’ and you retweet or repost it, it’s not a neutral transaction,” Mike Caulfield, the director of blended and networked learning at Washington State University Vancouver, wrote on his blog. “You move from being a person reading information to someone arguing a side of an issue, and once you are on a side of the issue, no amount of facts or argument is going to budge you.”
Facebook could find itself getting into ideological arguments with its own audience, which is already a regular problem for the social network. Throwing up a warning in this scenario creates a lot of friction in the user experience, and Facebook hates friction. The company actually already has a warning system for hoaxes, but the text is more a polite, quiet reminder than an alarm signal. A more aggressive system that deters users from sharing pieces of content would fly in the face of the company’s business interests. I’d expect the company to focus on more subtle measures, like tweaking the News Feed algorithm to decrease the spread of fake news (the algorithm is already regularly changed to limit the reach of things like YouTube videos).
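For a sense of what such a subtle measure could look like in practice, here is a hypothetical sketch in which a suspected hoax is neither removed nor flagged to the user but simply earns a lower feed score; the formula and weights are invented for illustration and are not Facebook’s ranking algorithm.

```python
# Hypothetical sketch of quiet downranking; the scoring model and weights are invented.
def feed_score(predicted_engagement: float,
               recency_weight: float,
               misinformation_probability: float) -> float:
    """Score a story for feed ranking, discounting likely misinformation."""
    base = predicted_engagement * recency_weight
    # Soft penalty: a post judged 90% likely to be misinformation keeps only a
    # fraction of its reach, but nothing is removed and no warning appears.
    penalty = 1.0 - 0.8 * misinformation_probability
    return base * penalty

# A likely hoax with strong engagement still circulates, just far less widely.
print(feed_score(predicted_engagement=120.0, recency_weight=0.9,
                 misinformation_probability=0.9))  # 30.24 instead of 108.0
```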
Technically, you can report news articles as fake on Facebook right now, but the option is buried under several menus and an easy-to-miss icon in the top-right corner of News Feed posts. Facebook has been very, very reluctant to mainline any options in its user interface that do not encourage a happy-go-lucky feedback loop — it took us more than six years to get an option besides the “Like” button. Giving the “report fake news” button more prominence would be a big shift for the company and, again, would increase the much-dreaded friction.
The “related articles” box that appears below some News Feed stories has been surfacing fake articles for years. A convenient fix would be to populate it only with articles from a list of trusted media sources, like the one Facebook used for the Trending box when it was curated by humans. Make the list public, and make the qualifications for being added to it specific.
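A rough sketch of what that allowlist approach could look like, with placeholder outlets standing in for whatever public, specific criteria Facebook might publish:

```python
# Hypothetical sketch: surface "related articles" only from a published allowlist.
# The outlets listed here are placeholders, not Facebook's actual criteria.
from urllib.parse import urlparse

TRUSTED_OUTLETS = {"nytimes.com", "wsj.com", "reuters.com", "apnews.com"}

def filter_related_articles(candidate_urls: list[str]) -> list[str]:
    """Keep only candidate links whose domain appears on the public allowlist."""
    kept = []
    for url in candidate_urls:
        domain = urlparse(url).netloc.lower().removeprefix("www.")
        if domain in TRUSTED_OUTLETS:
            kept.append(url)
    return kept

print(filter_related_articles([
    "https://www.reuters.com/some-report",
    "http://abcnews.com.co/fabricated-story",
]))  # only the Reuters link survives
```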
Facebook announced last week that it would explicitly ban fake news sites from monetizing through its Audience Network, the program that places Facebook-brokered ads on third-party sites and apps. This is a nice-sounding gesture that won’t do much to disrupt the economics of fake news, because fake news draws its audience through direct, organic sharing on Facebook itself. Any real solution must center on the way News Feed ranks stories and on the editorial standards for content that is allowed to appear directly on Facebook.
“Some of these ideas will work well, and some will not,” Zuckerberg acknowledges in his post. These solutions paint a picture of a company that will ultimately be battling itself as it takes on fake news. Since the creation of the “Like” button in 2009, Facebook has been constructing a system that thrives on a shallow positive feedback loop. We post stories not knowing whether our friends will read them, but we can at least guess how many of them will agree with the headline. The headlines that stick are rewarded with virality, spreading to more users’ feeds and earning more impulsive Likes. Users are rewarded with a virtual thumbs-up that we’ve slowly granted legitimate psychological meaning. Media outlets are rewarded with eyeballs that can be converted into ad dollars. Facebook is rewarded with a $347 billion market valuation. “Fake news” isn’t a glitch in the system; it is the Like economy working at peak efficiency. The question now is whether Facebook is really willing to look beyond engagement metrics as it tackles a problem this significant.