The Big, Bad Bot Problem

Twitter’s effort to address its bogeyman is long overdue, but years of neglect will make it tough to eradicate these menacing machines

In the early, innocent days of social media, it took just 138 minutes for nine Twitter accounts to execute a small conspiracy. On January 15, 2010, a group of users with names like @BrianD82 and @Leann_az bombarded people with messages about Massachusetts Attorney General Martha Coakley, who was running for U.S. Senate in a special election. Each tweet linked to a website that took a radio interview snippet out of context to make it seem as if Coakley wanted to ban Catholics from working in emergency rooms. “AG Coakley thinks Catholics shouldn’t be in the ER, take action now!” said one message. “Catholics can practice medicine too!” said another.

Though the nine accounts had names that sounded human enough, their behavior was unnatural. Collectively, they tweeted 929 times during their two-hour spree, replying to other users who were talking about the election. “They were tweeting at the same rate, at the same time, and with the same style,” says Takis Metaxas, a computer science professor at Wellesley College. “That gave us a lot of pause about what was happening.” The nine accounts weren’t people—they were bots, automated Twitter accounts that send messages without direct human intervention.
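To make that intuition concrete, here is a toy sketch, in Python, of the timing signal Metaxas describes: an account whose gaps between tweets barely vary looks scripted rather than human. The timestamps and the threshold are illustrative assumptions, not the researchers’ actual method.

    # Toy bot-detection heuristic: flag accounts whose tweet cadence is
    # suspiciously uniform. Threshold and data are made up for illustration.
    from statistics import pstdev

    def looks_automated(timestamps, max_jitter=5.0):
        # timestamps: posting times in seconds, sorted ascending
        gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
        # A human's gaps vary widely; a scripted account's are metronomic.
        return len(gaps) >= 5 and pstdev(gaps) < max_jitter

    print(looks_automated([0, 60, 120, 180, 240, 300]))   # True: one tweet a minute, like clockwork
    print(looks_automated([0, 45, 300, 310, 900, 2000]))  # False: irregular, human-looking gaps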

Metaxas and his colleague Eni Mustafaraj published a paper about the strange patterns they observed, calling the behavior of the nine automated accounts a “Twitter-bomb.” The bomb directly touched 573 users, but the fallout—users who followed accounts that retweeted the deceptive messages—reached more than 60,000. The researchers attributed the bots to the American Future Fund, a conservative advocacy group based in Iowa that launched the anti-Coakley website. Today, social media researchers cite this activity as the first known bot attack on Twitter meant to influence political discourse and attempt to swing an election.

In the eight ensuing years, bots have transformed from a cute quirk of Twitter’s open-platform roots into a political menace that threatens to undermine democracy, debate, and even public safety—if you believe the most alarmist headlines, anyway. The actual influence of bots, or even the precise number of them, is impossible to pinpoint. (Twitter says up to 8.5 percent of its 330 million active accounts are bots; researchers have pegged the number as high as 15 percent.) But the ambiguity grants them even more power in warping the internet’s hivemind. Along with “trolls” and “the Russians,” bots have become the nebulous monsters of Twitter, hiding in plain sight. And no one—including Twitter itself—seems quite sure what to do about them.


Bots long predate both Twitter and the commercial internet. In 1966, MIT professor Joseph Weizenbaum created ELIZA, a conversational computer program meant to emulate a psychotherapist. It—she—was hardly a breakthrough in artificial intelligence, using scripted responses to a set list of keywords to simulate dynamic interaction. Weizenbaum intended for the bot to be a parody of psychotherapy techniques, but was surprised when test subjects developed emotional attachments to his lines of code. “It does point to something basic in people, that we tend to give the thing we’re interacting with the benefit of the doubt in assuming it’s grasping the process and emotions behind what it’s doing,” says Tom Haigh, an associate professor at the University of Wisconsin–Milwaukee who studies the history of computing.
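ELIZA’s core trick fits in a few lines. The sketch below, in Python, is a loose reconstruction rather than Weizenbaum’s actual program, and its keywords and replies are invented for illustration: scan the input for a known word and return a canned, therapist-style response.

    # A bare-bones ELIZA-style responder: scripted replies keyed to words.
    RULES = {
        "mother": "Tell me more about your family.",
        "sad": "Why do you feel sad?",
        "always": "Can you think of a specific example?",
    }

    def respond(text):
        lowered = text.lower()
        for keyword, reply in RULES.items():
            if keyword in lowered:
                return reply
        return "Please go on."  # fallback keeps the conversation moving

    while True:
        print(respond(input("> ")))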

The early Twitter bots were much like ELIZA, winkingly lo-fi visions of a future when people casually interacted with their digital companions. Because Twitter was created as a public, open social network, developers could use its API to easily build automated accounts that made unique use of the web’s vast knowledge. There was a bot that tweeted every new movie coming to Netflix and another that tweeted every word in the dictionary. There was a bot that invented portmanteaus and another that mined Twitter for haiku. I still enjoy reading the bot that creates magical realist plots and the one that parodies Ice-T’s drug monologues on Law & Order: SVU. “Twitter bots represent an open-access laboratory for creative programming, where good techniques can be adapted and bad ones can form the compost for newer, better ideas,” botmaker Rob Dubbin wrote in a cheery New Yorker article in 2013.
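To see how low the barrier to entry was, consider roughly what a dictionary bot might have looked like built on tweepy, a popular third-party Python client for Twitter’s API. The credentials and the word list here are placeholders, and this is a sketch of the general pattern, not the code behind any actual bot.

    import time
    import tweepy  # third-party client for the Twitter API

    # Placeholder credentials, issued when a developer registers an app.
    auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
    auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
    api = tweepy.API(auth)

    words = ["aardvark", "abacus", "abandon"]  # stand-in for a full dictionary

    for word in words:
        api.update_status(word)  # post one word per tweet
        time.sleep(30 * 60)      # pause half an hour between posts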

But in addition to these fun diversions, Twitter became a breeding ground for bad bots. Early on they were crude, inundating users with links to financial scams or pornography. Such problems are common in the early life of a new communication platform—spam was the scourge of email once upon a time—but Twitter was a notoriously ramshackle technological operation that could barely keep its site online. Efforts to keep the bots at bay were thwarted by the people who sold automated accounts in bulk for a few pennies each. If Twitter started deleting suspected accounts with incomplete profiles, botmakers would give them pictures and bios to seem more legitimate.

At the same time, the capabilities of bots grew more sophisticated. In 2014 a cybersecurity company devised a program that could trawl through users’ tweets to send them personalized phishing attacks. The same year, a group of researchers conducted a study in which a small group of bots amassed tens of thousands of followers by simply recycling other users’ tweets to appear human, following lots of other accounts, and geo-tagging their messages so they looked more authentic. Today more advanced bots can hold rudimentary conversations with humans (… sort of) and even Google evidence to support their arguments. And when all else fails, bots can be converted into cyborgs, relying on human intervention to avoid detection by Twitter’s algorithms or suspicious users.

As bots have become more complex, the expertise needed to launch them has become simpler. “Around 2015, we started observing more and more open-source implementations of bots that popped up on platforms like GitHub as well as on the dark web for sale,” says Emilio Ferrara, an assistant research professor in computer science at the University of Southern California. “When you lower the barriers to access some technology, by definition it will become widely spread.”

It is only now becoming clear, by degrees, just how widespread social media bots are. A study by Ferrara released the day before the 2016 presidential election found that almost 20 percent of all election-related tweets came from about 400,000 Twitter bots. Over the ensuing year and a half, Twitter has slowly divulged that more than 50,000 Russian-linked bots were active around the time of the election, tweeting messages that were seen hundreds of millions of times. Bots have also influenced political discourse in France, Mexico, and other countries (they’re also thought to be a problem on Facebook, but researchers have little access to the site’s content to measure their influence).

The revelations of Russian interference online, mixed with the daily drip of news from special counsel Robert Mueller’s investigation into possible ties between the Trump administration and the Kremlin, have created a permanent sense of paranoia on Twitter. “You’re a bot” is now a common accusation. A recent overnight purge of bots led to conspiracy theories that Twitter was punishing conservatives. And claims by media outlets that bots drove online conversation about the Parkland shooting and the #ReleasetheMemo hashtag give the impression that the inmates are running the algorithms.

The reality is a little more complicated. The true strength of bots doesn’t come from their ability to imitate humans but rather from their usefulness as foot soldiers in Twitter’s natural state of chaotic, endless combat. On a platform where every user is potentially anonymous and metrics such as likes, followers, and retweets determine the value of a given post, they manage to turn obfuscation into persuasion. A recent Wired story by Molly McKew analyzed how the conspiracy theory calling a survivor of the Parkland shooting a “crisis actor” was seeded and propagated on Twitter. The story, published by the alt-right website Gateway Pundit, was first amplified by a series of rudimentary bots that simply tweeted the headline. Then Gateway Pundit founder Jim Hoft tweeted a link to the story, which was quickly amplified by bots with huge followings. Next, Chelsea Clinton quote-tweeted Hoft, denouncing the conspiracy theory, and her tweet, too, was first retweeted by a group of automated accounts. Such retweets signal to Twitter that a post is popular and should be served in more users’ timelines. Clinton’s tweet went viral, gaining 23,000 retweets, and the crisis actor story dominated news coverage for days.

Did the bots really poison social discourse, or had we already created a toxic stew that was destined to attract more bacteria? In 2010, a squadron of rogue accounts blaring a conspiracy theory seemed abnormal—today it’s just another day on Twitter, magnified both by bots and by the people eager to denounce them. Bots are a problem not only because they warp discourse but because they reward humanity’s worst impulses. Ferrara’s election research showed that bots were able to earn retweets at the same rate as humans. They are, in some sense, the most effective kind of Twitter user in the modern age—loud, unwavering in their opinions, and backed up by lots of equally loud friends.

“It’s something that’s hard to wrap our heads around why hundreds of thousands if not millions of people would fall for this fairly unsophisticated operation,” Ferrara says. “It wasn’t rocket science. It was very basic stuff.”

Some argue that the bot-pocalypse is overhyped. BuzzFeed recently noted that the analytics dashboard often cited in claims of post-election Russian influence tracks just a few hundred Twitter accounts with tenuous ties to Russia. Metaxas, the Wellesley professor who studied the first Twitter bot attack in 2010, is also skeptical. “I don’t know of any trends, but I believe that the role and number of the malicious bots are rather exaggerated in the usual reporting,” he said in an email.

While it’s hard to say how effective bots are at changing people’s opinions or their voting habits, they’ve changed the perception of Twitter. Bots were a nuisance; now they’re a national security threat. They were an easy way to boost follower counts; now they’re the subject of a New York Times investigation that Twitter felt obligated to respond to. The social network wants to use real-time conversation to effect change in the world, but its entire digital universe, from the follower counts to the trending topics to the tweets endlessly emerging from mysterious anonymous accounts, feels increasingly removed from reality.


On March 1, Twitter CEO Jack Dorsey issued a series of contemplative, vaguely remorseful tweets in the vein of Mark Zuckerberg’s come-to-Jesus moment on Facebook at the start of the year. “We have witnessed abuse, harassment, troll armies, manipulation through bots and human-coordination, misinformation campaigns, and increasingly divisive echo chambers,” he wrote. “We aren’t proud of how people have taken advantage of our service, or our inability to address it fast enough.”

Dorsey said the company was soliciting ideas to build a “systemic framework” for better online interactions, starting with a methodology for measuring the overall health of Twitter conversations. Users, of course, never run out of ideas for fixing Twitter. A “report bot” button has become a common request. Labeling automated accounts is another solution, though some researchers point out that if the effort is not comprehensive, it could make the bots that slip through the cracks even better at passing as human. The government could also play a role in curbing the problem. Under congressional pressure, Twitter announced in the fall that it will begin disclosing who pays for political ads on the platform. But Ferrara says those disclosures should also extend to bots used for political aims. “Social media has been shown to be as powerful, if not more, as the traditional print press and TV,” Ferrara says. “You need to draw the equivalence and make sure that the regulations apply and that they are enforced with fees … whenever they are violated.”

And then there’s the nuclear option—banning bots. It’s an unlikely fix because bots help Twitter’s bottom line by propping up its number of active users. Eliminating automated posting also wouldn’t stop hackers from devising a program that could simulate a human typing tweets into a web browser. And despite their current bad rap, bots remain a unique and compelling use of Twitter’s grand experiment in harnessing humanity’s collective ingenuity. “If you cut off API access to bot developers, you amputate a lot of creativity and apps that have made your platform great,” says John Emerson, a freelance consultant who makes bots that hold powerful institutions to account, like The New York Times and the New York Police Department.

Twitter, like Facebook and YouTube, will have to slow things down and spend more money if it wants to avoid an extreme measure like a bot ban. Maybe bots should be allowed only on authorized accounts that have been vetted in some way, like the platform’s verified system. More human moderators who can ferret out bot activity would probably help. And a stone-faced admission to investors that anti-bot efforts will lower the company’s usage metrics, like the one Zuckerberg offered about Facebook at the start of the year, would be a clear sign that Twitter takes the problem seriously and isn’t just trying to ride out a wave of bad press.

“Abusive bots are not abusive because they are bots, but because they are abusive,” Emerson says. “Twitter needs to invest in putting real talent and humans and eyeballs on the abuse problem. It’s expensive and doesn’t scale the way technology scales, but I think it’s the best way forward.”
