How Fiction Becomes Fact on Social Media

Hours after the Las Vegas massacre, Travis McKinney’s Facebook feed was hit with a scattershot of conspiracy theories. The police were lying. There were multiple shooters in the hotel, not just one. The sheriff was covering for casino owners to preserve their business.

The political rumors sprouted soon after, like digital weeds. The killer was anti-Trump, an “antifa” activist, said some; others made the opposite claim, that he was an alt-right terrorist. The two unsupported narratives ran into the usual stream of chatter, news and selfies.

“This stuff was coming in from all over my network of 300 to 400” friends and followers, said Mr. McKinney, 52, of Suffolk, Va., and some posts were from his inner circle.

But he knew there was only one shooter; a handgun instructor and defense contractor, he had been listening to the police scanner in Las Vegas with an app. “I jumped online and tried to counter some of this nonsense,” he said.

In the coming weeks, executives from Facebook and Twitter will appear before Congressional committees to answer questions about the use of their platforms by Russian hackers and others to spread misinformation and skew elections. During the 2016 presidential campaign, Facebook sold more than $100,000 worth of ads to a Kremlin-linked company, and Google sold more than $4,500 worth to accounts thought to be connected to the Russian government.

Agents with links to the Russian government set up an endless array of fake accounts and websites and purchased a slew of advertisements on Google and Facebook, spreading dubious claims that seemed intended to sow division all along the political spectrum — “a cultural hack,” in the words of one expert.

Yet the psychology behind social media platforms — the dynamics that make them such powerful vectors of misinformation in the first place — is at least as important, experts say, especially for those who think they’re immune to being duped. For all the suspicions about social media companies’ motives and ethics, it is the interaction of the technology with our common, often subconscious psychological biases that makes so many of us vulnerable to misinformation, and this has largely escaped notice.

Skepticism of online “news” serves as a decent filter much of the time, but researchers have found that our innate biases allow it to be bypassed, especially when we are presented with the right kind of algorithmically selected “meme.”

At a time when political misinformation is in ready supply, and in demand, “Facebook, Google, and Twitter function as a distribution mechanism, a platform for circulating false information and helping find receptive audiences,” said Brendan Nyhan, a professor of government at Dartmouth College.

For starters, said Colleen Seifert, a professor of psychology at the University of Michigan, “People have a benevolent view of Facebook, for instance, as a curator, but in fact it does have a motive of its own. What it’s actually doing is keeping your eyes on the site. It’s curating news and information that will keep you watching.”
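In code, the incentive Dr. Seifert describes reduces to something like the following caricature. This is a sketch only: real feed-ranking systems are proprietary and vastly more complex, and every name and number below is invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Story:
    headline: str
    predicted_engagement: float  # a model's guess at clicks/likes/shares

def rank_feed(stories: list[Story]) -> list[Story]:
    """Show first whatever the user is predicted to engage with;
    note that accuracy is not part of the objective."""
    return sorted(stories, key=lambda s: s.predicted_engagement, reverse=True)

feed = rank_feed([
    Story("City council passes routine budget", 0.02),
    Story("Lurid rumor about a rival politician", 0.31),
    Story("A friend's vacation photos", 0.12),
])
for story in feed:
    print(f"{story.predicted_engagement:.2f}  {story.headline}")
```

Nothing in that objective rewards truth; a falsehood that holds attention outranks a correction that does not.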

That kind of curating acts as a fertile host for falsehoods by simultaneously engaging two predigital social-science standbys: the urban myth as “meme,” or viral idea; and individual biases, the automatic, subconscious presumptions that color belief.

The first process is largely data-driven, experts said, and built into social media algorithms. The wide circulation of bizarre, easily debunked rumors — so-called Pizzagate, for example, the canard that Hillary Clinton was running a child sex ring from a Washington-area pizza parlor — is not entirely dependent on partisan fever (though that was its origin).

For one, the common wisdom that these rumors gain circulation because most people conduct their digital lives in echo chambers or “information cocoons” is exaggerated, Dr. Nyhan said.

In a forthcoming paper, Dr. Nyhan and colleagues review the relevant research, including analyses of partisan online news sites and Nielsen data, and find the opposite. Most people are more omnivorous than presumed; they are not confined in warm bubbles containing only agreeable outrage.

But they don’t have to be for fake news to spread fast, research also suggests. Social media algorithms function at one level like evolutionary selection: Most lies and false rumors go nowhere, but the rare ones with appealing urban-myth “mutations” find psychological traction, then go viral.
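That selection dynamic can be made concrete with a toy branching-process simulation, a sketch under invented parameters rather than any platform’s actual mechanics: most rumors die within a few shares, while the rare one whose “mutation” pushes it past a critical threshold reaches an enormous audience.

```python
import numpy as np

rng = np.random.default_rng(42)

def cascade_size(share_prob: float, followers: int = 20, cap: int = 100_000) -> int:
    """Each sharer exposes `followers` people; each exposed person
    reshares independently with probability `share_prob`."""
    active, total = 1, 1
    while active and total < cap:
        new = rng.binomial(active * followers, share_prob)
        active, total = new, total + new
    return total

# Below the critical point (share_prob * followers <= 1) virtually every
# rumor dies out; just past it, a meaningful fraction explodes.
for p in (0.01, 0.04, 0.06):
    sizes = [cascade_size(p) for _ in range(2000)]
    viral = sum(s >= 10_000 for s in sizes)
    print(f"share_prob={p:.2f}  median reach={int(np.median(sizes))}  viral={viral}/2000")
```

The winners emerge by selection, not design; no one at the platform has to pick them.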

There is no precise formula for such digital catnip. The point, experts said, is that the very absurdity of the Pizzagate lie could have boosted its early prominence, no matter the politics of those who shared it.

“My experience is that once this stuff gets going, people just pass these stories on without even necessarily stopping to read them,” Mr. McKinney said. “They’re just participating in the conversation without stopping to look hard” at the source.

Digital social networks are “dangerously effective at identifying memes that are well adapted to surviving, and these also tend to be the rumors and conspiracy theories that are hardest to correct,” Dr. Nyhan said.

One reason is the raw pace of digital information sharing, he said: “The networks make information run so fast that it outruns fact-checkers’ ability to check it. Misinformation spreads widely before it can be downgraded in the algorithms.”

The extent to which Facebook and other platforms function as “marketers” of misinformation, similar to the way they market shoes and makeup, is contentious. In 2015, a trio of behavioral scientists working at Facebook inflamed the debate with a paper published in the prominent journal Science.

The authors analyzed the newsfeeds of some 10 million users in the United States who posted their political views, and concluded that “individuals’ choices played a stronger role in limiting exposure” to contrary news and commentary than Facebook’s own algorithmic ranking — which gauges how interesting stories are likely to be to individual users, based on data they have provided.

Outside critics lashed the study as self-serving, while other researchers said the analysis was solid and without apparent bias.

The other dynamic that works in favor of proliferating misinformation is not embedded in the software but in the biological hardware: the cognitive biases of the human brain.

Purely from a psychological point of view, subtle individual biases are at least as important as rankings and choice when it comes to spreading bogus news or Russian hoaxes — like a false report of Muslim men in Michigan collecting welfare for multiple wives.

For starters, merely understanding what a news report or commentary is saying requires a temporary suspension of disbelief. Mentally, the reader must temporarily accept the stated “facts” as possibly true. A cognitive connection is made automatically: Clinton-sex offender, Trump-Nazi, Muslim men-welfare.

And refuting those false claims requires a person to first mentally articulate them, reinforcing a subconscious connection that lingers far longer than people presume.

Over time, for many people, it is that false initial connection that stays the strongest, not the retractions or corrections: “Was Obama a Muslim? I seem to remember that….”

In a recent analysis of the biases that help spread misinformation, Dr. Seifert and co-authors named this and several other automatic cognitive connections that can buttress false information.

Another is repetition: Merely seeing a news headline multiple times in a newsfeed makes it seem more credible before it is ever read carefully, even if it’s a fake item being whipped around by friends as a joke.

And, as salespeople have known forever, people tend to value the information and judgments offered by good friends over all other sources. It’s a psychological tendency with significant consequences now that nearly two-thirds of Americans get at least some of their news from social media.

“Your social alliances affect how you weight information,” said Dr. Seifert. “We overweight information from people we know.”
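As a purely illustrative model of that overweighting (the weights below are invented, not taken from Dr. Seifert’s research), a reader’s credence can be sketched as a source-weighted tally of what they hear:

```python
# Hypothetical weights: the same claim counts for more from a close tie.
TIE_WEIGHT = {"close friend": 3.0, "acquaintance": 1.5, "stranger": 1.0}

def weighted_credence(reports: list[tuple[str, bool]]) -> float:
    """reports: (source_type, supports_claim) pairs. Returns the share
    of tie-weighted evidence that supports the claim."""
    support = sum(TIE_WEIGHT[src] for src, backs in reports if backs)
    total = sum(TIE_WEIGHT[src] for src, _ in reports)
    return support / total

# Two close friends repeating a rumor outweigh three strangers debunking it.
reports = [("close friend", True), ("close friend", True),
           ("stranger", False), ("stranger", False), ("stranger", False)]
print(f"{weighted_credence(reports):.0%}")  # prints 67%
```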

The casual, social, wisecracking nature of thumbing through and participating in the digital exchanges allows these biases to operate all but unchecked, Dr. Seifert said.

Stopping to drill down and determine the true source of a foul-smelling story can be tricky, even for the motivated skeptic, and mentally it’s hard work. Ideological leanings and viewing choices are conscious, downstream factors that come into play only after automatic cognitive biases have already had their way, abetted by the algorithms and social nature of digital interactions.

“If I didn’t have direct evidence that all these theories were wrong” from the scanner, Mr. McKinney said, “I might have taken them a little more seriously.”

Credits: The New York Times
