New study explains why fake news is perceived as real on Twitter

Posted at 3:45 PM, Mar 22, 2017

The Twitter logo at the company’s headquarters in San Francisco, California. (JOSH EDELSON/AFP/Getty Images)

By Ese Olumhense

And the biggest reason a viral hoax gets believed is…

You’re most likely to be duped by a rumor or hoax on Twitter if the message shared has many retweets, a study published Wednesday found.

Lots of retweets serve as a sort of “normative cue” for users to both trust and further share the info, Hyegyu Lee, PhD, and Hyun Jung Oh, PhD, researchers at two South Korean universities, found. And frequently retweeted tweets are more trusted when labeled “news” instead of “rumor.”

“Regardless of the label of the information, message believability and intention to share were stronger for a tweet with a high number of retweets,” the researchers wrote. The “presumption that a message is believable,” they found, is strongly associated with how many retweets it gets.

The researchers arrived at their conclusion by assessing the responses of more than 600 participants to a random series of four tweets, two of them labeled “news” and two labeled “rumor.” In each category, news or rumor, one of the two tweets presented had many retweets; the other had far fewer. Participants were later asked to evaluate the accuracy or believability of the messages. For both categories, seeing many retweets had a “significant impact” on participants’ trust in the information, which, in turn, drove them to share it.

If all your friends jump off a bridge, would you jump too?

The conclusion isn’t a novel one. A tweet going viral suggests it is a popular or accepted piece of information, one that others have reviewed and probably vetted. The researchers’ findings reinforce “informational cascade” theory, popularized by Cass R. Sunstein, founder and director of the Program on Behavioral Economics and Public Policy at Harvard Law School.

Put simply, an informational cascade is what happens when people observe the actions of others and follow suit, setting aside their own knowledge to do so.

Parents and other fans of the question “If all your friends jump off a bridge, would you jump too?” have cautioned (in far simpler, more guilt-inducing terms) against the dangers of informational cascades, but as users get more and more news via the web and social media, with less and less context, the influence of groupthink can be substantial.

Fake news was a major concern for some Americans during the months leading up to the 2016 presidential election. In the final three months of the campaign season, the top-performing misleading or outright false stories shared on Facebook drew more engagement than the leading content from major, more traditional print and digital outlets, “easily” outpacing them, one analysis showed. Predatory publishers capitalized on this, driving significant traffic to their sites with hoax articles on a bevy of subjects. At times, the viral misinformation bordered on the violent, predicting civil war or threatening riots if a particular candidate won or lost.

After the election, some members of the press blamed Facebook for President Donald Trump’s win, claiming its lucrative advertising model enabled nefarious actors, even those operating outside the United States, to sway popular political opinion.

Safeguarding yourself from automated rumor mills

The study, the latest glimpse into social media behavior and rumor dissemination, comes at an opportune time. Twitter is embarking on a concerted effort to eradicate “bot” accounts, which automatically post tweets that, boosted by other bots, often go viral. Researchers have previously shown that these bots favored Trump 5:1 over Hillary Clinton by Election Day, a conclusion that, combined with the findings of the just-published research, lends credibility to claims that fake news helped Trump win.

Legions of bots can publish dozens of tweets per account per day, and coordinated networks of them can both inflate a user’s follower count and, when weaponized, amplify misinformation.

This kind of choreography is impressive, but users can protect themselves. Rather than wait for tech companies to find and implement effective solutions while hoaxers grow their armies, experts advise reading and vetting information shared online more carefully instead of simply taking viral posts at face value. This, they say, could jam the automated rumor mills.

“Whether by word-of-mouth, email, or Twitter, rumors proliferate quite easily,” said Brenda K. Wiederhold, PhD, editor-in-chief of Cyberpsychology, Behavior, and Social Networking, the journal in which the new study was published. “We must remain adamant in our use of critical-thinking skills to evaluate information and avoid equating popularity with plausibility.”