Algorithms Are Designed to Addict Us, and the Consequences Go Beyond Wasted Time

Goethe’s The Sorcerer’s Apprentice is the classic version of a familiar story. The young apprentice enchants a broom to fetch water, sparing himself the chore. But the enchantment quickly spirals out of control: the broom, monomaniacally focused on its task but oblivious to the consequences, ends up flooding the room.

The classic fear surrounding hypothetical, superintelligent AI is that we might give it the wrong goal, or insufficient constraints. Even in the well-established field of narrow AI, machine learning algorithms prove remarkably adept at finding unexpected, unintended ways to achieve their goals. Let loose in the structured environment of video games, where a simple function (points scored) is to be maximized, they often discover exploits and cheats that let them win without really playing.

In some ways, YouTube’s algorithm is an immensely complicated beast: it serves up billions of recommendations a day. But its goals, at least originally, were fairly simple: maximize the likelihood that the user will click on a video, and the length of time they spend on YouTube. It has been stunningly successful: 70 percent of time spent on YouTube is watching recommended videos, amounting to 700 million hours a day. Every day, humanity as a collective spends a thousand lifetimes watching YouTube’s recommended videos.

The design of this algorithm is, of course, driven by YouTube’s parent company, Alphabet, pursuing its own objective: advertising revenue, and hence the profitability of the company. Practically everything else that happens is a side effect. The neural nets of YouTube’s algorithm form connections (statistical weightings that favor some pathways over others) based on the colossal amount of data we all generate by using the site. It may seem an innocuous, even sensible, way to determine what people want to see; but without oversight, the unintended consequences can be nasty.

Guillaume Chaslot, a former engineer at YouTube, has helped to expose some of these. Speaking to TheNextWeb, he pointed out, “The problem is that the AI isn’t built to help you get what you want—it’s built to get you addicted to YouTube. Recommendations were designed to waste your time.”

More than this: they can waste your time in harmful ways. Inflammatory, conspiratorial content generates clicks and engagement. If a small subset of users watches hours upon hours of political or conspiracy-theory content, the pathways in the neural net that recommend this content are reinforced.

The result is that users can begin with innocuous searches for relatively mild content and quickly find themselves dragged towards extremist or conspiratorial material. A survey of 30 attendees at a Flat Earth conference found that all but one had originally come upon the Flat Earth conspiracy via YouTube; the lone exception was introduced to the ideas by family members who had themselves been converted by YouTube.

Many readers (and this writer) know the experience of being sucked into a “wormhole” of related videos and content when browsing social media. But these wormholes can be extremely dark. Recently, a “pedophile wormhole” was discovered on YouTube: a network of recommended videos of children, frequented by people seeking to exploit them. In TechCrunch’s investigation, it took only a few recommendation clicks from a (somewhat raunchy) search for adults in bikinis to reach this exploitative content.

It’s simple, really: as far as the algorithm, with its one objective, is concerned, a user who watches one factual and informative video about astronomy and then goes on with their day is less advantageous than a user who watches fifteen flat-earth conspiracy videos in a row.
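
To make the arithmetic concrete, here is a toy sketch in Python. The scoring function and every number in it are invented for illustration; this is not YouTube’s code, just what a naive watch-time objective looks like when it compares those two sessions.

```python
# Toy illustration of a pure watch-time objective (all numbers invented).

def session_value(videos_watched, avg_minutes_per_video):
    """Score a viewing session purely by total minutes watched."""
    return videos_watched * avg_minutes_per_video

# User A: one factual astronomy video, then logs off.
user_a = session_value(videos_watched=1, avg_minutes_per_video=12)

# User B: fifteen flat-earth conspiracy videos in a row.
user_b = session_value(videos_watched=15, avg_minutes_per_video=12)

# 12 vs. 180: the objective "prefers" user B's session, so whatever chain
# of recommendations produced it gets reinforced.
print(user_a, user_b)
```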

In some ways, none of this is particularly new. The algorithm is learning to exploit familiar flaws in the human psyche, just as other algorithms exploit flaws in the code of 1980s Atari games to rack up points. Clickbait videos replace conspiratorial tabloid content on similar themes. Social media algorithms, rather than TV advertising, exploit our short attention spans. Filter bubbles that once formed by hanging around with people you agreed with and reading newspapers that echoed your opinions are now reinforced by algorithms.

Any platform that reaches the size of the social media giants is bound to be exploited by people with destructive or irresponsible aims. It is equally hard to see how such platforms could operate at this scale without relying heavily on algorithms; even content moderation, which is only partially automated, takes a heavy toll on the human moderators required to filter the worst content imaginable. Yet directing how the human race spends a billion hours a day, often shaping people’s beliefs in unexpected ways, is evidently a source of great power.

The answer given by social media companies tends to be the same: better AI. These algorithms needn’t be blunt instruments. Tweaks are possible. For example, an older version of YouTube’s algorithm consistently recommended “stale” content, simply because this had the most viewing history to learn from. The developers fixed this by including the age of the video as a variable.
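
As a rough sketch of the idea (simplified, with invented feature names, and not YouTube’s implementation), the fix amounts to handing the model the video’s age as one more input, so it can learn that freshness matters rather than silently favoring whatever has the longest watch history:

```python
import numpy as np

# Invented, simplified feature vector for a candidate video. The key fix is the
# last entry: exposing the video's age lets the model learn a preference for
# fresh uploads instead of defaulting to old videos with long watch histories.
def build_features(video, now_hours):
    return np.array([
        video["historical_watch_minutes"],       # popularity (biased toward old videos)
        video["avg_fraction_watched"],           # how much of the video viewers finish
        now_hours - video["upload_time_hours"],  # age of the video, in hours
    ], dtype=float)

old_hit  = {"historical_watch_minutes": 9e6, "avg_fraction_watched": 0.55, "upload_time_hours": 0.0}
new_clip = {"historical_watch_minutes": 4e4, "avg_fraction_watched": 0.70, "upload_time_hours": 4990.0}

now = 5000.0
print(build_features(old_hit, now))   # huge history, ~5000 hours old
print(build_features(new_clip, now))  # tiny history, ~10 hours old
```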

Similarly, the shift in focus from click likelihood to time spent watching was meant to stop low-quality videos with clickbait titles, which leave users dissatisfied with the platform, from being recommended. Recent updates aim to prioritize news from reliable and authoritative sources, and to make the algorithm more transparent by explaining why recommendations were made. Other potential tweaks could give more weight to whether users “like” videos, as an indication of quality. And YouTube videos on topics prone to conspiracy, such as global warming, now include links to factual sources of information.

The issue, however, is sure to arise when such fixes conflict in a big way with the company’s profitability. Take a recent tweak to the algorithm, aimed at reducing the bias introduced by the order in which videos are presented. Essentially, if you have to scroll further down before clicking on a particular video, YouTube gives that choice more weight: the user has probably sought out content closer to what they actually want. A neat idea, and one that improved user engagement by 0.24 percent, translating into millions of dollars in revenue for YouTube.
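
A rough sketch of that kind of position weighting might look like the following; the weighting scheme here is invented for illustration, not drawn from YouTube’s system.

```python
# Invented position-weighting scheme: a click far down the recommendation list
# is treated as a stronger signal of genuine interest than a click on the very
# first item, which may owe more to placement than to preference.

def click_weight(position, base=1.0, per_rank=0.1):
    """Weight a click by how far the user scrolled to reach it."""
    return base + per_rank * (position - 1)

clicks = [
    {"video": "top_of_feed", "position": 1},
    {"video": "scrolled_to", "position": 12},
]

for click in clicks:
    click["weight"] = click_weight(click["position"])

# Weight 1.0 for the top slot, 2.1 for the twelfth slot: the model learns more
# from choices the user actively went looking for.
print(clicks)
```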

If addictive content and engagement wormholes are what’s profitable, will the algorithm change the weight of its recommendations accordingly? What weights will be applied to ethics, morality, and unintended consequences when making these decisions?

Herein lies the fundamental tension in trying to deploy these large-scale algorithms responsibly. Tech companies can tweak their algorithms, and journalists can probe their behavior and expose some of the unintended consequences. But just as the algorithms must grow more sophisticated and stop optimizing a single metric without regard for the consequences, so must the companies that build them.

Image Credit: Wikimedia Commons

Thomas Hornigold (http://www.physicalattraction.libsyn.com/)
Thomas Hornigold is a physics student at the University of Oxford. When he's not geeking out about the Universe, he hosts a podcast, Physical Attraction, which explains physics - one chat-up line at a time.