
An army of almost 13,500 fake Twitter accounts tweeted extensively about the Brexit referendum only to disappear shortly after the vote, according to new research from City, University of London.

The Twitter bots posted almost 65,000 messages during a four-week period, with their content showing a “clear slant towards the leave campaign”.

Researchers Dr Marco Bastos and Dr Dan Mercea say the Twitter bots operated automatically as a “supervised network of zombie agents” and also tweeted pro-remain content, suggesting the accounts were being used in support of both sides.

The academics believe the mass coordination and sudden deactivation of the “sock puppet” accounts show how Twitter bots are being used strategically to amplify particular views during political events.

But the deletion of the accounts in the weeks after the EU referendum has ensured their human controllers remain hidden.

Key findings
  • 13,493 accounts were deleted by their users or suddenly blocked or removed by Twitter
  • An additional 26,538 suddenly changed their username
  • 5% of all EU referendum tweeters were either deleted or recycled with a new name
  • 31% of bot messages included the word “leave”, compared with 17% containing “remain”
  • Bots were eight times more likely to tweet leave slogans than other Twitter users
  • 63% of URLs in bot tweets no longer exist or do not work

The new study, published in the journal Social Science Computer Review, is the first to systematically identify the content tweeted by bots during the referendum campaign, as well as the scale, intensity and reach of their activity and other major characteristics of their behaviour.

The research also uncovered two distinct strategies for deploying botnets. A portion of the network was dedicated to retweeting other bots, while another part only retweeted content from a small number of human users.

In addition to the deleted accounts, a further 26,500 suddenly changed their names shortly after polling stations closed, suggesting there is a potential market of Twitter bots that are repurposed from one campaign to the next.

The researchers analysed 39 key hashtags and keywords associated with the referendum and identified 794,949 accounts, which sent out ten million tweets over the four-week period of 10th June to 10th July 2016. From this group, they then found 13,493 accounts that matched the tweeting behaviour of bots according to a list of tell-tale characteristics.
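To give a concrete sense of that filtering step, the sketch below selects tweets matching a handful of referendum keywords from a hypothetical dataset and counts the accounts behind them. The filename, column names and keyword list are illustrative assumptions, not the researchers' actual data or the full set of 39 tracked terms.

```python
import pandas as pd

# Hypothetical tweet dataset; the column names (user, text) are assumptions
# for illustration, not the schema used in the study.
tweets = pd.read_csv("referendum_tweets.csv")

# Illustrative subset of referendum hashtags and keywords (the study used 39).
keywords = ["#brexit", "#voteleave", "#strongerin", "#euref", "#remain", "#leaveeu"]
pattern = "|".join(keywords)

# Keep tweets that mention any tracked keyword (case-insensitive match).
matched = tweets[tweets["text"].str.contains(pattern, case=False, na=False)]

# One row per account: how many matching tweets each posted in the window.
per_account = matched.groupby("user").size().sort_values(ascending=False)
print(f"{per_account.size} accounts posted {len(matched)} matching tweets")
```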

An example of a particularly active bot was an account named @trendingpls, which tweeted 2,474 messages in the period. Bots were mainly active in the week preceding the vote and on the eve of the referendum, when there was a peak in activity among automated accounts.

How to identify a bot

  • Periods of high-volume posting, followed by a drop in activity levels
  • Sudden deletion at the same time as other suspected bots
  • Activity that does not follow daily human patterns influenced by work and leisure time
  • High ratio of retweets to tweets
  • Usernames with computer-generated, uncommon words
  • High ratio of outward @-mentions to inward @-mentions
  • User account created in the past two years
  • Low retweet reciprocity (retweeting others, but not being retweeted)
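
To show how signals like these might be combined in practice, the sketch below scores an account against a few of the characteristics listed above. The field names, thresholds and example values are illustrative assumptions, not the classification rules used in the study.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Account:
    # Field names are assumptions for illustration only.
    created_at: datetime
    tweets: int              # original tweets authored
    retweets: int            # retweets made
    times_retweeted: int     # retweets received from others
    mentions_out: int        # @-mentions of others
    mentions_in: int         # @-mentions received

def bot_score(a: Account, now: datetime) -> int:
    """Count how many tell-tale characteristics an account matches.
    Thresholds are arbitrary placeholders, not values from the paper."""
    score = 0
    # Recently created account (within the past two years).
    if (now - a.created_at).days < 730:
        score += 1
    # High ratio of retweets to original tweets.
    if a.retweets > 3 * max(a.tweets, 1):
        score += 1
    # Low retweet reciprocity: retweets others but is rarely retweeted.
    if a.retweets > 0 and a.times_retweeted < a.retweets / 10:
        score += 1
    # High ratio of outward to inward @-mentions.
    if a.mentions_out > 3 * max(a.mentions_in, 1):
        score += 1
    return score

now = datetime(2016, 7, 10, tzinfo=timezone.utc)
suspect = Account(datetime(2016, 3, 1, tzinfo=timezone.utc), 5, 2400, 12, 900, 20)
print(bot_score(suspect, now))  # higher scores suggest more bot-like behaviour
```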

'Zombie agents'

Dr Marco Bastos, the lead researcher of the project, said: “We believe these accounts formed a network of zombie agents, given their orchestrated behaviour and known bot characteristics.

“Twitterbots can trigger retweet cascades in a fraction of the time required by active users to start cascades of comparable size, but are unsuccessful at generating large retweet cascades.

“We didn’t find evidence that bots helped spread fake news. Instead, they were invested in feeding and echoing user-curated, hyperpartisan and polarizing information.

“Unfortunately, it’s not easy for real Twitter users to spot bots because of the volume of data necessary to recognize their activity patterns, but this study shows how they can be identified with careful analysis.”

Other key findings

  • Most posts were retweets, with 54% of bots never authoring an original tweet
  • Five accounts alone tweeted 10% of all content posted by bots (@trendingpls, @EuFear, @steveemmensUKIP, @uk5am, and @no_eusssr_thx)
  • Fake accounts were successful at getting their tweets retweeted up to 600 times
  • Bots tweeted an average of five posts compared with 1.2 for other users

'Artificial levels of public support'

Dr Dan Mercea said: “The purpose of these bots was to swell artificial levels of public support for different sides of the vote by tweeting or retweeting both human users and other fake accounts.

“This is clear evidence of strategic communication using bots, which were made to post large numbers of messages at certain times and created a false impression of public popularity towards different ideas.”

The paper, The Brexit Botnet and User-Generated Hyperpartisan News, has been published in Social Science Computer Review.
