NATO researchers have found that social-media companies like Facebook do a poor job of policing automated bots that can manipulate their platforms. Johanna Geron/Reuters

“A very #MerryChristmas to all,” Margrethe Vestager, Europe’s top antitrust enforcer, wrote on Facebook last December. Her post attracted 144 likes.

A few months later, as an experiment, researchers paid a company a few dollars to attract attention to her well wishes. In 30 minutes, the post had 100 more likes. The researchers had similar results on a holiday post on Ms. Vestager’s Instagram account and on a Christmas tweet from Vera Jourova, the European Union’s justice commissioner.

Companies such as Facebook Inc. and Twitter Inc. are poorly policing automated bots and other methods for manipulating social-media platforms, according to a report released on Friday by researchers from the NATO Strategic Communications Center of Excellence. With a small amount of money, the researchers found, virtually anyone can hire a company to get more likes, comments and clicks.

The group, an independent organization that advises the North Atlantic Treaty Organization, tested the tech companies’ ability to stop paid influence campaigns by turning to 11 Russian and five European companies that sell fake social-media engagement. For €300 ($440), the researchers bought more than 3,500 comments, 25,000 likes, 20,000 views and 5,000 followers, including on posts from prominent politicians such as Ms. Vestager and Ms. Jourova.

After four weeks, about 80 per cent of the fake clicks remained, the researchers said. And virtually all of the accounts that had been used to generate the clicks remained active three weeks after researchers reported them to the companies.

The report spotlights the continuing challenges for Facebook, YouTube and Twitter as they try to combat online disinformation and other forms of online manipulation. After Russia interfered in the United States’ 2016 presidential election, the companies made numerous changes to reduce the spread of online disinformation and foreign interference. In recent months, the platforms have announced takedowns of accounts in China, Saudi Arabia and, most recently, Africa, where Russia was testing new tactics.

But the report also brings renewed attention to an often overlooked vulnerability for internet platforms: companies that sell clicks, likes and comments on social-media networks. Many of the companies are in Russia, according to the researchers. Because the social networks’ software ranks posts in part by the amount of engagement they generate, the paid activity can lead to more prominent positions.

“We spend so much time thinking about how to regulate the social-media companies – but not so much about how to regulate the social-media manipulation industry,” said Sebastian Bay, one of the researchers who worked on the report. “We need to consider if this is something which should be allowed but, perhaps more, to be very aware that this is so widely available.”

From May to August, the researchers tested the ability of the social networks to handle the for-hire manipulation industry. The researchers said they had found hundreds of providers of social-media manipulation with significant revenue. They signed up with 16.

“The openness of this industry is striking,” the report says. “In fact, manipulation service providers advertise openly on major platforms.”

The researchers bought engagements on about a hundred posts on Facebook, Instagram, Twitter and YouTube. They saw “little to no resistance,” Mr. Bay said.

After their purchase, the researchers identified nearly 20,000 accounts that were used to manipulate the social-media platforms, and reported a sample of them to the internet companies. Three weeks later, more than 95 per cent of the reported accounts were still active online.

The researchers directed most of the clicks to posts on social-media accounts they had made for the experiment. But they also tested some verified accounts, such as Ms. Vestager’s, to see if they were better protected. They were not, the researchers said.

The researchers said that to limit their influence on real conversations, they had bought engagement on posts from politicians that were at least six months old and contained apolitical messages.

The researchers found that the big tech companies were not equally bad at removing manipulation. Twitter identified and removed more of the fake activity than the others did; on average, half the likes and retweets bought on Twitter were eventually removed, the researchers said.

Facebook, the world’s largest social network, was best at blocking the creation of accounts under false pretences, but it rarely took content down.

Instagram, which Facebook owns, was the easiest and cheapest to manipulate. The researchers found YouTube the worst at removing inauthentic accounts and the most expensive to manipulate. The researchers reported 100 accounts used for manipulation in their test to each of the social-media companies; YouTube was the only one that did not suspend any of them, and it offered no explanation.

Samantha Bradshaw, a researcher at the Oxford Internet Institute, a department at Oxford University, said easy social-media manipulation could have implications for European elections this year and the 2020 presidential election in the United States.

“Fake engagement – whether generated by automated or real accounts – can skew the perceived popularity of a candidate or issue,” Ms. Bradshaw said. “If these strategies are used to amplify disinformation, conspiracy and intolerance, social media could exacerbate the polarization and distrust that exist within society.”

Ms. Bradshaw, who reviewed the report independently, said the reason accounts might not have been taken down was that “they could belong to real people, where individuals are paid a small amount of money for liking or sharing posts.” This strategy, she pointed out, makes it much harder for the platforms to take action.

Still, she said the companies could do more to track and monitor accounts associated with manipulation services. And the companies could suspend or remove the accounts after several instances of suspicious activity to diminish inauthentic behaviour.

“Examining fake engagement is important because accounts don’t have to be fake to pollute the information environment,” Ms. Bradshaw said. “Real people can use real accounts to produce inauthentic behaviour that skews online discourse and generates virality.”
