Post-Truth: Technology Is a Big Part of the Problem, But It’s Also a Solution

“Post-truth” was chosen as the 2016 word of the year by Oxford Dictionaries, a catch-all term describing a perception of growing distrust of “the facts” and increasing reliance on emotion in public discourse.

It is frequently used to describe the politics that underpinned Donald Trump’s successful presidential campaign, as well as the sentiment behind Britain’s vote to leave the EU. It was leading Brexit campaigner Michael Gove who coined the immortal line, “people in this country have had enough of experts.”

It’s unwise to dismiss the genuine and deep-seated grievances at the heart of the rejection of the establishment’s “evidence-based” narrative, which too often talks down to people rather than explaining itself, and can be blind to its own prejudices. But as many have pointed out, post-truth politics is also a product of dramatic changes to the media landscape.

A report from Pew Research in May found that 62% of Americans get at least some of their news from social media. That’s problematic for a number of reasons, not least because there is evidence that social networks create echo chambers that polarize debate and shut out alternative viewpoints.

The blame lies partly with users—people have a tendency to reject or avoid facts that don’t conform to their beliefs, and their decisions on which groups to join and what posts to promote certainly shape their experience. But there is also an undeniable algorithmic element.

Internet activist Eli Pariser introduced the concept of the “filter bubble” in 2010: in their efforts to personalize your experience, companies like Google and Facebook can isolate you from opposing perspectives. He explains that when he asked two friends to type “BP” into Google, one got links to investment opportunities, the other links to coverage of the Deepwater Horizon spill.
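
To make the mechanism concrete, here is a minimal sketch of how engagement-driven personalization can produce that effect. The ranking rule, topics, and click histories are all invented for illustration; this is not how Google or Facebook actually rank content.

```python
# Toy illustration of a filter bubble: results are ordered by how often
# the user has clicked each topic before, so two users issuing the same
# query see different orderings, and every click deepens the divergence.
from collections import Counter

def rank_results(query_results, click_history):
    """Order results by the user's past clicks per topic (hypothetical rule)."""
    topic_counts = Counter(click_history)
    return sorted(query_results,
                  key=lambda item: topic_counts[item["topic"]],
                  reverse=True)

# Two users issue the same query, "BP", with different histories.
results = [
    {"title": "BP investment opportunities", "topic": "finance"},
    {"title": "Deepwater Horizon spill coverage", "topic": "environment"},
]
investor_history = ["finance", "finance", "finance", "environment"]
activist_history = ["environment", "environment", "environment"]

print([r["title"] for r in rank_results(results, investor_history)])
# finance story ranks first for the investor
print([r["title"] for r in rank_results(results, activist_history)])
# spill coverage ranks first for the activist
```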

At the same time, technology has made it possible to track reader engagement far more effectively. That has fueled a rise in clickbait journalism, where the success of a story is tied more closely to page views than to the quality of its content. It has also spawned fake news sites that publish outrageous claims purely to drive web traffic and, in turn, advertising revenue.

The same incentives have pushed established publications in the same direction as they chase ever-dwindling revenues in a bid to stay afloat. Combined with shrinking newsrooms, this leads to eye-catching trivia at the expense of in-depth reporting. It also encourages sensationalism—often explicitly aimed at specific echo chambers—driving further reader polarization. And ultimately, as the number of competing narratives gets ever larger, the media’s role as the gatekeeper of the truth is eroded, accelerating the fragmentation of political consensus.

But while technology has played a large part in us reaching this point, it could also provide solutions.

Tech firms have been reluctant to acknowledge their role in this process, but after a year where misinformation has taken center stage in American politics, both Facebook and Google have announced plans to tackle fake news sites by cutting off their advertising revenue.

The pair has also teamed up with publications like the New York Times, Washington Post, and BuzzFeed, as well as Twitter, to form a coalition designed to promote news literacy among social media users, draw up a voluntary code of practice, and launch a platform where members can verify questionable stories.

The problem is not simply fake news, though—it’s an emergent property of the way social networks are designed, and it might take more involved solutions to counter it.

Cesar Hidalgo from MIT’s Media Lab suggests that social media networks could introduce high-quality articles into people’s feeds at random or use algorithms to identify your bias and show you stories from the other end of the political spectrum. EscapeYourBubble is a Chrome extension that does just that.
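
Here is a minimal sketch of both of those suggestions. Everything in it is assumed for illustration: the leaning scores, the `estimate_user_bias` heuristic, and the `diversify_feed` function are hypothetical and not drawn from Hidalgo’s work or any real platform.

```python
import random

def estimate_user_bias(read_leanings):
    """Crude bias estimate: average leaning of articles the user reads,
    where -1.0 is far left and +1.0 is far right (hypothetical scale)."""
    return sum(read_leanings) / len(read_leanings)

def diversify_feed(feed, quality_pool, user_bias, inject_rate=0.2):
    """Occasionally swap a feed item for either a story from the opposite
    end of the spectrum or, failing that, a random high-quality article."""
    out = []
    for item in feed:
        if random.random() < inject_rate:
            # Prefer counter-bias stories; fall back to random quality picks.
            opposed = [a for a in quality_pool
                       if a["leaning"] * user_bias < 0]
            out.append(random.choice(opposed or quality_pool))
        else:
            out.append(item)
    return out

# Example: a right-leaning reader gets occasional left-leaning injections.
bias = estimate_user_bias([0.8, 0.6, 0.9])
pool = [{"title": "In-depth policy analysis", "leaning": -0.5},
        {"title": "Nonpartisan explainer", "leaning": 0.0}]
feed = [{"title": "Partisan opinion piece", "leaning": 0.9}] * 5
print(diversify_feed(feed, pool, bias))
```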

Fact-checking became a big part of the recent presidential election, but writing in TechCrunch, Chris Nicholson, co-founder of Skymind and Deeplearning4j, argues that machine learning could help automate the process. Improvements in natural-language processing powered by deep learning mean we can now detect all kinds of patterns in text, so why not patterns of truthfulness too?
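
As a rough sketch of the idea, here is a minimal claim classifier built with scikit-learn. The six training claims and their labels below are invented placeholders; a real system would train on thousands of labeled fact-checks (for example, from an archive like PolitiFact’s) and would still face the gray areas discussed below.

```python
# Minimal sketch: learn surface patterns that separate checked-true from
# checked-false claims. Toy data only; output is illustrative, not a verdict.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

claims = [
    "Unemployment fell to 4.6 percent in November, official figures show.",
    "The senator voted for the bill in committee last week.",
    "Turnout in the election was the highest since 1968.",
    "Scientists confirm the moon landing was staged in a studio.",
    "Millions of ballots were secretly destroyed on election night.",
    "The candidate was endorsed by every living president, sources say.",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = checked true, 0 = checked false

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression())
model.fit(claims, labels)

# predict_proba yields a truthfulness score rather than a hard verdict,
# leaving borderline claims for human fact-checkers to review.
test = "Official figures show turnout was the highest in decades."
print(model.predict_proba([test])[0][1])
```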

With reams of text flowing through the internet every day, computers may be the only things capable of keeping up with the sheer volume. That’s why groups like PolitiFact and Full Fact are building tools designed to analyze content on TV, social media, and websites in real time, 24 hours a day, and they say functional products are months rather than years away.

The problem with truthfulness, though, is that there are considerable gray areas that can be hard enough for human fact-checkers to navigate. Programming that kind of nuance into even the most sophisticated software will be a serious challenge, and brings with it the specter of all-too-human assumptions sneaking into the code.

This highlights a major area of concern for the companies in the crosshairs, which have long championed the supposed neutrality of the internet and its democratization of information. As Mark Zuckerberg wrote in his Facebook post announcing the company’s efforts to tackle fake news, “We do not want to be arbiters of truth ourselves.”

But this is at odds with the company’s increasingly central role in the distribution of media. Whether it likes it or not, Facebook’s decisions have an instrumental effect on what news people see. Sooner or later, it will have to own up to the fact that it is a media company, not just a technology company.

Internet giants are not the only ones who need to do some soul searching, though. Filter bubbles and echo chambers are not new phenomena; they’re just a digital manifestation of humans’ long-standing tendency towards tribalism. The volume of misinformation and bias churned out by the internet certainly accelerates and exacerbates the problem, but research suggests that, at its heart, it is a human-driven process.

Technology companies undoubtedly need to do more to counteract the negative influence they are having on the public discourse, but technology cannot provide a panacea for human nature. Ultimately, we are the only ones with the power to broaden our own perspective.

Image Credit: Shutterstock

Edd Gent
http://www.eddgent.com/
Edd is a freelance science and technology writer based in Bangalore, India. His main areas of interest are engineering, computing, and biology, with a particular focus on the intersections between the three.