This Startup Is Training AI to Gobble Up the News and Rewrite It Free of Bias

Bias in journalism is nothing new, but there are growing concerns that technology is pushing us into echo chambers where we only hear one side of the story. Now a startup says it’s using AI to bring us a truly impartial source of news.

Knowhere launched earlier this month, alongside an announcement that it had raised $1.8 million in venture capital. The site uses AI to aggregate news from hundreds of sources and create three versions of each story: one skewed to the left, one skewed to the right, and one that’s meant to be impartial.

Natural language processing algorithms trawl through more than a thousand news sources to identify popular stories, the company told Motherboard. The system analyzes these stories for narrative, facts, and bias and uses the resulting database to put together three versions of each story. For non-political stories the categories are impartial, positive, and negative.
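Knowhere hasn’t published technical details, but the description suggests a pipeline with a recognizable shape. The sketch below is purely illustrative: the clustering heuristic, the slant tags, and all the data are invented for the example, not taken from the company.

```python
# A minimal, hypothetical sketch of the pipeline described above: cluster
# articles covering the same event, pool their claims, and assemble one
# draft per slant. Everything here is invented for illustration.

from collections import defaultdict

def jaccard(a, b):
    """Crude story matcher: word overlap between two headlines."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def cluster_stories(articles, threshold=0.5):
    """Greedily group articles whose headlines overlap enough."""
    clusters = []
    for art in articles:
        for cluster in clusters:
            if jaccard(art["headline"], cluster[0]["headline"]) >= threshold:
                cluster.append(art)
                break
        else:
            clusters.append([art])
    return clusters

def uniq(items):
    """Deduplicate while preserving order."""
    return list(dict.fromkeys(items))

def draft_versions(cluster):
    """Build one draft per slant from claims tagged with that framing."""
    by_slant = defaultdict(list)
    for art in cluster:
        for slant, claim in art["claims"]:  # slant: "left", "right", "neutral"
            by_slant[slant].append(claim)
    return {
        "left": " ".join(uniq(by_slant["neutral"] + by_slant["left"])),
        "right": " ".join(uniq(by_slant["neutral"] + by_slant["right"])),
        "impartial": " ".join(uniq(by_slant["neutral"])),
    }

articles = [
    {"headline": "Senate passes budget bill", "claims": [
        ("neutral", "The Senate passed the budget bill 52-48."),
        ("right", "The bill reins in runaway spending."),
    ]},
    {"headline": "Budget bill passes Senate", "claims": [
        ("neutral", "The Senate passed the budget bill 52-48."),
        ("left", "Critics say the bill guts social programs."),
    ]},
]

for cluster in cluster_stories(articles):
    print(draft_versions(cluster))
```

In reality the claim extraction and slant tagging are the hard NLP problems; the point here is only the overall shape: many sources in, one clustered story, three framings out.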

The AI can write a story in anywhere from 1 to 15 minutes, depending on how much disagreement there is between sources. Human editors then give each article a once-over before it goes live.

The company says its aim is to get machines to do what humans can’t: sift through the flood of stories written about major events to distill the most salient facts and narratives.

“We are practicing a form of journalism that overcomes information overload and its resulting silos, attempting to reconcile the many different narratives spun out of every story, and taking our first steps towards a truly comprehensive and comprehensible source of record for all,” co-founder and chief editor Nathaniel Barling said in a statement.

The company isn’t the first to suggest we may need to enlist machines to wade through the misinformation and bias of the post-truth era. The 2016 US presidential election brought the term “fake news” into the public consciousness, and much of the effort in this area so far has focused on weeding out deliberately deceptive articles on the internet.

Facebook has been trialing a series of tools that highlight potential fake news stories and link to articles disputing them. At the World Economic Forum meeting in Davos in February, Google executives also reportedly floated the idea of a browser extension that could flag suspicious material on Facebook’s or Twitter’s websites.

Now AI is being enlisted in the battle too, according to MIT Technology Review. The startup AdVerif.ai is building algorithms to help content platforms detect false stories, and volunteers in the AI community have started the Fake News Challenge, a competition designed to spur the development of machine learning tools for fact-checking.

This may be harder than it sounds, though. As Martin Robbins writes in the Guardian, nailing down what counts as fake news isn’t always easy. Checking numerical claims against publicly available datasets is one thing; uncovering the truth of murky, closed-door political dealings is quite another.

The problem Knowhere is dealing with is even more complicated and nuanced. Even if the facts in a story are true, the way they are presented can be tweaked to support a particular agenda, and pretty much all news organizations are guilty of this to varying degrees.

Even with the best will in the world, it’s hard to avoid, because so much of what journalists write about is subjective. It’s not possible to have a completely unbiased opinion about politics or religion, and a writer or editor’s beliefs will undoubtedly seep into their copy.

The company’s hope is that by taking a broad sample of news sources, each biased to a different extent, it can identify a middle way. It eventually plans to do away with the three versions and simply publish the impartial one.

But it may not be that simple. For a start, Knowhere admitted to Motherboard that the founders have weighted the trustworthiness of sources manually based on their reputation for accuracy. And when machine-written stories are passed to editors, they not only check for errors and style, but also look for signs of bias in the impartial versions of stories. These edits are then used to further train the algorithms.
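To make that feedback loop concrete, here is one toy way editor corrections could be turned into a training signal. This is an assumption for illustration only, using a word-level diff and a running “loadedness” score per word; Knowhere has not described its actual method.

```python
# Toy sketch of an editor-feedback loop: words an editor removes from the
# machine's "impartial" draft are treated as loaded, and the system learns
# to suppress them in future drafts. Hypothetical; not Knowhere's method.

from collections import Counter

bias_scores = Counter()  # learned "loadedness" per word, starts at zero

def editor_feedback(machine_draft, edited_draft, lr=0.1):
    """Penalize words the editor stripped out of the impartial draft."""
    removed = set(machine_draft.lower().split()) - set(edited_draft.lower().split())
    for word in removed:
        bias_scores[word] += lr

def filter_draft(draft, threshold=0.25):
    """Drop words editors have repeatedly flagged as loaded."""
    return " ".join(w for w in draft.split() if bias_scores[w.lower()] < threshold)

# Three editors independently cut the same loaded verb...
for _ in range(3):
    editor_feedback("lawmakers rammed the bill through committee",
                    "lawmakers passed the bill through committee")

print(filter_draft("the mayor rammed the plan through"))
# -> "the mayor the plan through": "rammed" is now suppressed
```

The crude output also hints at the pitfall the next paragraph raises: a signal this blunt, amplified over thousands of edits, bakes the editors’ own judgments into what the system labels “impartial.”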

It’s well-established that AI has a strong tendency to pick up human biases, and even subtle signals can have a significant impact when amplified across thousands of training examples. Human decisions about trustworthiness and impartiality can’t help but be subjective, and while any resulting bias may be subtle, if it appears under the banner of “impartial” it could be insidious.

A more promising approach may be to simply broaden people’s exposure to competing narratives and let them make up their own minds: effectively, remove the “impartial” version from Knowhere’s stories and just show readers the takes from left and right.

Last year, Finnish and Italian researchers developed an algorithm that can detect the political leanings of social media users and present them with opposing views. The Chrome extension EscapeYourBubble does something similar, inserting stories from the other side of the political divide into a user’s Facebook news feed.

Unfortunately, most people are very comfortable with their biases. It takes a particularly open-minded person to install a service that deliberately challenges their views, and social media sites are unlikely to impose content users find controversial for fear of driving them away.

Ultimately, and perhaps unsurprisingly, it may be hard to find simple technological fixes to the very human problems of misinformation and bias.

Image Credit: Phonlamai Photo / Shutterstock.com

Edd Gent
http://www.eddgent.com/
Edd is a freelance science and technology writer based in Bangalore, India. His main areas of interest are engineering, computing, and biology, with a particular focus on the intersections between the three.