Tech Spread Misinformation in 70 Countries This Year. How Can We Make 2020 Better?

Following discussions about the societal influence of a technology like the internet feels much like watching a tennis match. One side serves with ‘the internet is the greatest tool of enlightenment ever!’ The opposition counters with a baseline drive of ‘we’re drowning in data and misinformation that’s leading us into a new dark age!’ The pro-internet side scrambles, and with outstretched racket and legs akimbo manages to return with a ‘without the internet we wouldn’t be connected through the likes of social media!’ The anti-internet side is ready at the net. It raises not just its racket but a new report from the Oxford Internet Institute (OII). ‘Read this and weep!’ it screams, triumphantly, before sending a thumping smash straight at the pro-internet side.

Let’s abandon the tennis match for a while and zoom in on the report, titled “The Global Disinformation Order: 2019 Global Inventory of Organised Social Media Manipulation.” Its pages, detailing how governments, political parties, and organizations use technology to spread misinformation, are a somber read. The report describes how algorithms and big data, among other tools, have been used to spread untruths and outright lies in 70 countries during 2019. The number of countries where misinformation is spread through such technologies has doubled in just two years.

During the same span of time, the likes of deepfakes have risen to prominence. Luckily, the same applies to technology-based ways of spotting and countering misinformation. As 2019 nears its end, the question becomes whether either side looks set to claim victory in 2020.

Technology Drives Misinformation

According to the report, 45 democracies and 26 authoritarian states have seen the use of technology to propagate misinformation during 2019. The goals vary, covering everything from garnering voter support to altering or suppressing public opinion, and in some cases inciting violence.

Exerting digital influence over foreign countries, primarily through Facebook and Twitter, is on the rise. Much of the activity can be linked to Russia and China, with the latter emerging as a major player in the global disinformation order, for example in connection with the recent Hong Kong uprising.

It’s no surprise that the authors behind the report describe the problem as “a critical threat to democracy,” with Samantha Bradshaw, lead author of the report, stating,

“[…] Although social media was once heralded as a force for freedom and democracy, it has increasingly come under scrutiny for its role in amplifying disinformation, inciting violence, and lowering trust in the media and democratic institutions.”

Furthermore, new kinds of misinformation and misinformation tools are appearing. The best-known examples are likely manipulated videos, from true deepfakes to the doctored clip of Nancy Pelosi that made her appear to slur her words. The number of deepfakes online has increased by 84 percent in less than a year. At the same time, AI is being used to spread misinformation faster than ever before.

The consequences of misinformation are highlighted by numerous events, including the 2016 US presidential election and the plight of the Rohingya in Myanmar. In other instances, it has created uncertainty about issues where an overwhelming majority of experts agree on the facts, such as vaccines and climate change.

Technology to the Rescue?

The above points to a simple conclusion: technology is to blame for the spread of misinformation, and it’s making the creation of misinformation cheaper.

However, that conclusion misses a central point. While technology may well be a contributor to the rise of misinformation, it is also at the core of how we are fighting it.

Perhaps AI is the best example. With the vast increase in news coverage, humans need help detecting when a story is untrue. Researchers at the University of Washington and the Allen Institute for AI (AI2) have developed an AI model to do just that. It has proven able to identify fake news with 92 percent accuracy. Other AI systems have proven skilled at detecting deepfake images and videos.
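To give a rough sense of how such detection systems work at their simplest, here is a minimal, hypothetical Python sketch. It treats fake-news detection as ordinary supervised text classification, which is far cruder than the UW/AI2 model described above; the headlines and labels are invented purely for demonstration.

```python
# A minimal sketch of fake-news detection as supervised text classification.
# This is NOT the UW/AI2 model mentioned above -- just an illustration of the
# general approach: learn patterns from labeled examples, then score new text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples (1 = fabricated, 0 = legitimate).
headlines = [
    "Scientists confirm new exoplanet in nearby star system",
    "Miracle pill cures all known diseases overnight, doctors stunned",
    "City council approves budget for public transit expansion",
    "Secret world government admits to controlling the weather",
]
labels = [0, 1, 0, 1]

# TF-IDF features feed a simple linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(headlines, labels)

# Score an unseen headline; the output is the estimated probability it is fake.
test = ["Celebrity endorses cream that reverses aging in one week"]
print(model.predict_proba(test)[0][1])
```

Real systems train on vastly larger corpora and richer signals (source reputation, propagation patterns, generated-text fingerprints), but the core loop of learning from labeled examples is the same.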

Blockchain is another technology seeing a lot of activity. Both startups and venerable news organizations like the New York Times are building blockchain-based solutions to combat the rise of misinformation. Blockchain’s ability to create permanent, indelible records in a distributed network is among its unique strengths when it comes to verifying information and establishing trust in it.
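To make that mechanism concrete, here is a small, hypothetical Python sketch of the core idea: content is hashed, the hash is recorded somewhere immutable, and anyone can later check whether the content still matches. The actual projects involve distributed ledgers and provenance metadata; this only illustrates the tamper-evidence principle, not any news organization’s real system.

```python
# A minimal sketch of tamper-evident content verification via hashing.
# In a blockchain-based system, the fingerprint would be recorded on a
# distributed ledger so it cannot be quietly altered later.
import hashlib

def fingerprint(content: str) -> str:
    """Return a SHA-256 hash that uniquely identifies this exact content."""
    return hashlib.sha256(content.encode("utf-8")).hexdigest()

article = "Reservoir levels fell 12 percent over the summer, officials said."
published_hash = fingerprint(article)  # imagine this stored on a public ledger

# Later, anyone can verify the article hasn't changed since publication.
tampered = article.replace("12 percent", "2 percent")
print(fingerprint(article) == published_hash)   # True  -> content is unchanged
print(fingerprint(tampered) == published_hash)  # False -> content was modified
```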

More low-tech solutions are also on the drawing board. Earlier this summer, the BBC convened a meeting with publishers and big tech to look at how they could work together to tackle the rise of fake news. Among the proposed solutions were early warning systems, media education, and shared learning.

The Common Solution

As 2019 trundles towards its last two months, it’s time to flick back to the tennis match. The pause button is still pressed, giving us time to look at a 2017 Pew survey of technologists, scholars, practitioners, and strategic thinkers. They were asked whether the state of misinformation would improve over the next ten years. The two sides were evenly balanced: 51 percent believed it would not improve, 49 percent that it would.

Those who believe things will get worse focus on the ability of fake news to prey on deep human instincts, and on how new digital tools make it easier to exploit people’s preference for comfort and convenience, thereby reinforcing echo chambers.

On the other hand, technology can also help fix these problems. Experts who believe things will improve focus on how the speed, reach, and efficiency of the internet, apps, and platforms can be harnessed to combat fake news and misinformation.

Both sides are, to some extent, missing one important factor: people.

Previous studies have shown that we are generally very good at judging whether something is real or fake. Technology can amplify this ability through solutions like crowdsourcing the verification of information.

So, as we press play on the tennis match, it seems set that the smash will be returned, and the back and forth over technology and misinformation will continue well into 2020. However, unlike the tennis match, it also seems clear that we, the audience, can play an active role in deciding the outcome.

Image Credit: Photo by Rami Al-zayat on Unsplash

Marc Prosser
Marc is British, Danish, Geekish, Bookish, Sportish, and loves anything in the world that goes 'booiingg'. He is a freelance journalist and researcher living in Tokyo and writes about all things science and tech. Follow Marc on Twitter (@wokattack1).