
Indiana University study shows how social bots escalate the spread of fake news

Bots can identify and target influencers


By Heather Hamilton, contributing writer

A recent study examining the way fake news spreads has confirmed, unsurprisingly, that social media is one of the most efficient channels for spreading stories and that bots are the chief instigators. PhD candidate Chengcheng Shao and a team of researchers at Indiana University in Bloomington looked at the spread of fake news on Twitter and offered suggestions for curbing what the MIT Technology Review refers to as an epidemic.

Although fake news has always existed, it re-entered the popular zeitgeist following the 2016 election season, and with a president who relishes the chance to engage on social media, it shows no signs of stopping. The 45th president of the United States recently came under fire for retweeting what was likely a social bot, according to the Washington Post.

Fake news refers to any news that is false or misleading, and its distribution has grown so vast that a variety of fact-checking sites have sprung up in the last year. On sites like factcheck.org, users can find a list of the most common claims that these sites are asked to fact-check, along with 122 websites that are repeat offenders. Some sites tell blatant lies, while others focus on satire and never purport to tell the truth. But things can get a bit murky if you’re not paying attention or fall into an echo chamber.

For the purposes of the study, Shao’s team also included satire sites (like The Onion). “We did not exclude satire because many fake news sources label their content as satirical, making the distinction problematic,” he said.

The researchers monitored 400,000 claims made by the websites on the list and examined how they spread across social media, in this case Twitter, collecting approximately 14 million posts that mention those claims. They also looked at 15,000 stories written by fact-checking organizations and about 1 million Twitter posts that mention them.

The researchers then examined the Twitter accounts responsible for spreading the news, gathering around 200 of each account’s most recent tweets. Their intent was to study each account’s behavior and determine whether it was run by a human or a bot. Once that distinction was made, Shao looked at how each group spread news. The team built two online platforms for the work: Hoaxy, which tracks fake-news claims, and Botometer, a tool that determines whether Twitter accounts are run by humans or bots.

According to its website, Hoaxy is a public tool that lets users visualize the spread of claims and fact-checking. A user enters a potential claim in the search bar, and the results return a list of matching claims and fact checks. The user can then either read the text or select relevant articles and click “visualize,” which produces a visual representation of how the news is spreading, even identifying specific Twitter users.
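For readers who want the same data programmatically, Hoaxy also exposes a public API. The sketch below is a minimal Python example of an article search; the host name, endpoint path, parameter names, and response fields are assumptions based on Hoaxy’s public API documentation and may differ from the current service, and the RapidAPI key is a placeholder.

```python
# Minimal sketch of querying Hoaxy's article-search API with Python's requests
# library. The host, path, parameters, and response fields are assumptions and
# may have changed; RAPIDAPI_KEY is a placeholder, not a real credential.
import requests

RAPIDAPI_KEY = "your-rapidapi-key-here"  # placeholder

response = requests.get(
    "https://api-hoaxy.p.rapidapi.com/articles",   # assumed endpoint
    params={"query": "vaccines autism", "sort_by": "recent"},
    headers={
        "x-rapidapi-host": "api-hoaxy.p.rapidapi.com",
        "x-rapidapi-key": RAPIDAPI_KEY,
    },
    timeout=30,
)
response.raise_for_status()

# Each returned article record typically includes a headline and the source
# domain, which is the information Hoaxy's visualization is built on.
for article in response.json().get("articles", []):
    print(article.get("title"), "-", article.get("domain"))
```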

Botometer examines the activity of a Twitter account and issues a score indicating how likely the account is to be a bot; the higher the score, the more likely the account is automated. Anyone can run a check simply by entering a Twitter handle. The tool isn’t perfect: if you search an organizational account (@BarackObama, for example), it may misidentify it.
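Botometer can likewise be queried from code. The sketch below uses the botometer Python client published by Indiana University’s Observatory on Social Media (OSoMe); it assumes you have Twitter app credentials and a RapidAPI key (both shown as placeholders), and the exact keys in the returned score dictionary vary between API versions.

```python
# Minimal sketch of checking an account with the botometer Python client.
# The credentials below are placeholders, and the exact keys in the result
# dictionary vary by Botometer API version.
import botometer

twitter_app_auth = {
    "consumer_key": "xxxx",        # placeholder Twitter app credentials
    "consumer_secret": "xxxx",
    "access_token": "xxxx",
    "access_token_secret": "xxxx",
}

bom = botometer.Botometer(
    wait_on_ratelimit=True,
    rapidapi_key="your-rapidapi-key-here",  # placeholder
    **twitter_app_auth,
)

# check_account() fetches the account's recent activity and returns a dict of
# bot-likelihood scores; higher scores mean the account looks more automated.
result = bom.check_account("@some_handle")   # hypothetical handle
print(result.get("scores"))
```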

“Accounts that actively spread misinformation are significantly more likely to be bots,” revealed Shao. “Social bots play a key role in the spread of fake news. Automated accounts are particularly active in the early spreading phases of viral claims and tend to target influential users.”

Fake news spreads quickly because influential users, usually unwittingly, repost it, and from there it cascades like any other viral content. Shao suggests curbing social bots as a strategy for mitigating the spread of fake news, but that may be easier said than done.

While many bots spread fake news, others are used to spread real news, so outlawing them altogether would also cut off the flow of legitimate information. Furthermore, any legislation to stop bots would have to contend with international borders.

Sources: arXiv, MIT Technology Review, Washington Post, factcheck.org
Image Source: Pixabay
