AI Is About to Completely Change the Face of Entertainment

Twenty years ago, entertainment was dominated by a handful of producers and monolithic broadcasters, a near-impossible market to break into.

Today, the industry is almost entirely dematerialized, while storytellers and storytelling mediums explode in number. And this is just the beginning.

Netflix turned entertainment on its head practically overnight, shooting from a market cap of US$8 billion in 2010 (the same year Blockbuster filed for bankruptcy) to a record US$185.6 billion just eight years later. This year, it is expected to spend a whopping US$15 billion on content alone.

Meanwhile, VR platforms like Google’s Daydream and Oculus have only begun bringing the action to you, while mixed reality players like Dreamscape will forever change the way we experience stories, exotic environments, and even classrooms of the future.

In the words of Barry Diller, a former Fox and Paramount executive and the chairman of IAC, “Hollywood is now irrelevant.”

In this two-part series, I’ll be diving into three future trends in the entertainment industry: AI-based content curation, participatory story-building, and immersive VR/AR/MR worlds.

Today, I’ll be exploring the creative future of AI’s role in generating on-demand, customized content and collaborating with creatives, from music to film, in refining their craft.

Let’s dive in!

AI Entertainment Assistants

For many of us, film brought to life our conceptions of AI, from Marvel’s JARVIS to HAL in 2001: A Space Odyssey.

And now, over 50 years later, AI is bringing stories to life like we’ve never seen before.

Converging with the rise of virtual reality and colossal virtual worlds, AI has begun to create vastly detailed renderings of deceased stars, generate complex supporting characters with intricate story arcs, and even bring your favorite performers—whether Marlon Brando or Amy Winehouse—back to the big screen and into immersive virtual environments.

While still in its nascent stages, AI has already been used to embody virtual avatars that you can converse with in VR, soon to be customized to your individual preferences.

But AI will have far more than one role in the future of entertainment as industries converge atop this fast-moving arena.

You’ve likely already seen the results of complex algorithms that predict the precise percentage likelihood you’ll enjoy a given movie or TV series on Netflix, or recommendation algorithms that queue up your next video on YouTube. Or think Spotify playlists that build out an algorithmically refined, personalized roster of your soon-to-be favorite songs.
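
Under the hood, many of these recommenders rest on collaborative filtering: factor a sparse matrix of user ratings into compact ‘taste’ vectors, then use those vectors to score what you haven’t watched yet. Below is a minimal sketch of that idea; the ratings data, factor count, and hyperparameters are invented purely for illustration and bear no relation to any platform’s actual system.

```python
import numpy as np

# Toy user-item ratings matrix (0 = unrated). All values here are made up
# for illustration; real recommenders train on millions of rows.
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)

n_users, n_items = R.shape
k = 2                                   # latent "taste" factors per user/item
rng = np.random.default_rng(0)
U = rng.normal(scale=0.1, size=(n_users, k))   # user factor matrix
V = rng.normal(scale=0.1, size=(n_items, k))   # item factor matrix

lr, reg = 0.01, 0.02
for _ in range(5000):                   # SGD over the observed ratings only
    for u, i in zip(*R.nonzero()):
        err = R[u, i] - U[u] @ V[i]     # prediction error on a known rating
        u_row = U[u].copy()
        U[u] += lr * (err * V[i] - reg * U[u])
        V[i] += lr * (err * u_row - reg * V[i])

# Score user 0 on the title they haven't rated (item 2)
print(f"Predicted rating: {U[0] @ V[2]:.2f}")
```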

And AI entertainment assistants have barely gotten started.

This is already the aim of AIs like Google Assistant and Huawei’s Xiaoyi (a voice assistant that lives inside Huawei’s smartphones and AI Cube smart speaker). Coming advancements will soon enable your assistant to search for and select songs based on your current and desired mood, pick out movies that bridge your and your friends’ watching preferences on a group film night, or even load games whose characters are personalized to interact with you as you jump from level to level.

Or imagine your own home leveraging facial recognition technology to assess your disposition, cross-reference historical data on your entertainment choices at a given time or frame of mind, and automatically queue up a context-suiting song or situation-specific video for comic relief.

Curated Content Generators

Beyond personalized predictions, however, AIs are now taking on content generation, multiplying your music repertoire, developing entirely new plotlines, and even bringing your favorite actors back to the screen or—better yet—directly into your living room.

Take AI motion transfer, for instance.

Employing generative adversarial networks (GANs), a subset of machine learning, a team of researchers at UC Berkeley has developed an AI motion transfer technique that superimposes the dance moves of professionals onto any amateur (‘target’) individual in seamless video.

By first mapping the target’s movements onto a stick figure, Caroline Chan and her team create a database of frames, each frame associated with a stick-figure pose. They then use this database to train a GAN and thereby generate an image of the target person based on a given stick-figure pose.

Map a series of poses from the source video to the target, frame-by-frame, and soon anyone might moonwalk like Michael Jackson, glide like Ginger Rogers, or join legendary dancers on a virtual stage.
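
For the technically curious, here is a bare-bones sketch of the conditional-GAN idea behind such motion transfer: a generator learns to turn stick-figure pose maps into frames of the target person, while a discriminator judges (pose, frame) pairs as real or generated. The architecture and tensor shapes below are illustrative assumptions, not the Berkeley team’s actual model.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a stick-figure pose image to a photorealistic frame."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )
    def forward(self, pose):
        return self.net(pose)

class Discriminator(nn.Module):
    """Scores (pose, frame) pairs: real target footage vs. generated."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),
        )
    def forward(self, pose, frame):
        return self.net(torch.cat([pose, frame], dim=1)).mean(dim=[1, 2, 3])

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

# One training step on a (pose, real frame) batch from the target's database.
pose = torch.randn(8, 3, 64, 64)        # stand-in for rendered stick figures
real = torch.randn(8, 3, 64, 64)        # stand-in for real video frames

fake = G(pose)
# Discriminator learns to tell real frames from generated ones.
d_loss = bce(D(pose, real), torch.ones(8)) + \
         bce(D(pose, fake.detach()), torch.zeros(8))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator learns to fool the discriminator.
g_loss = bce(D(pose, fake), torch.ones(8))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```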

Somewhat reminiscent of AI-generated “deepfakes,” the use of generative adversarial networks in film could massively disrupt entertainment, bringing legendary performers back to the screen and granting anyone virtual stardom.

Just as digital artists increasingly enhance computer-generated imagery (CGI) techniques with high-fidelity 3D scanning for unprecedentedly accurate renditions of everything from skin pores to lifelike hair textures, AI is about to give CGI a major upgrade.

Fed countless hours of footage, AI systems can be trained to refine facial movements and expressions, replicating them on any CGI model of a character, whether a newly generated face or iterations of your favorite actors.

Want Marilyn Monroe to star in a newly created Fast and Furious film? No problem! Keen to cast your brother in one of the original Star Wars movies? It might soon be as easy as contracting an AI to edit him in, ready for his next Jedi-themed birthday.

Companies like Digital Domain, co-founded by James Cameron, are hard at work to pave the way for such a future. Already, Digital Domain’s visual effects artists employ proprietary AI systems to integrate humans into CGI character design with unparalleled efficiency.

As explained by Digital Domain’s Digital Human Group director Darren Hendler, “We can actually take actors’ performances—and especially facial performances—and transfer them [exactly] to digital characters.”

And last weekend, AI-CGI cooperation took center stage in Avengers: Endgame, seamlessly recreating facial expressions on its villain Thanos.

Even in the realm of video games, upscaling algorithms have been used to revive childhood classic video games, upgrading low-resolution features with striking new graphics.

One company that has begun commercializing AI upscaling techniques is Topaz Labs. While some manual craftsmanship is required, the use of GANs has dramatically sped up the process, promising extraordinary implications for gaming visuals.

But how do these GANs work? After training on millions of pairs of low-res and high-res images, one part of the algorithm (the generator) attempts to build a high-resolution frame from its low-resolution counterpart, while the second component (the discriminator) evaluates that output against the real thing. As this feedback loop of generation and evaluation drives the GAN’s improvement, the upscaling process only gets more efficient over time.
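
In code, that feedback loop might look something like the SRGAN-style sketch below, where the upscaler is trained both to match the ground-truth high-res patch and to fool a critic. The layer sizes, 4x scale factor, and loss weighting are assumptions for illustration, not Topaz Labs’ actual model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

upscale = nn.Sequential(                # generator: low-res in, high-res out
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3 * 16, 3, padding=1),
    nn.PixelShuffle(4),                 # folds channels into a 4x-larger image
)
critic = nn.Sequential(                 # discriminator: "does this look real?"
    nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 1, 4, stride=2, padding=1),
)

low = torch.randn(4, 3, 32, 32)         # stand-in low-res patches
high = torch.randn(4, 3, 128, 128)      # matching high-res ground truth

fake = upscale(low)                     # attempt a 4x upscale
score = critic(fake)                    # critic's judgment of the attempt

# Two training signals: stay faithful to the ground truth (pixel loss)
# and convince the critic the output is a real photograph (adversarial loss).
g_loss = F.l1_loss(fake, high) + 1e-3 * F.binary_cross_entropy_with_logits(
    score, torch.ones_like(score))
g_loss.backward()                       # gradients flow back into the upscaler
```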

“After it’s seen these millions of photos many, many times it starts to learn what a high resolution image looks like when it sees a low resolution image,” explained Topaz Labs CTO Albert Yang.

Imagine a future in which we might transform any low-resolution film or image into remarkably detailed footage at the click of a button.

But it isn’t just film and gaming that are getting an AI upgrade. AI songwriters are now making a major dent in the music industry, from personalized repertoires to melody creation.

AI Songwriters and Creative Collaborators

While not seeking to replace your favorite song artists, AI startups are leaping onto the music scene, raising millions in VC investments to assist musicians with the creation of novel melodies and underlying beats… and perhaps one day with lyrics themselves.

Take Flow Machines, a songwriting algorithm already in commercial use. Now used by numerous musical artists as a creative assistant, Flow Machines has even made appearances on Spotify playlists and top music charts.

And startups are fast following suit, including Amper, Popgun, Jukedeck, and Amadeus Code.

But how do these algorithms work? By processing thousands of genre-specific songs or an artist’s genre-mixed playlist, songwriting algorithms are now capable of optimizing and outputting custom melodies and chord progressions that interpret a given style. These in turn help human artists refine tunes, derive new beats, and ramp up creative ability at scales previously unimaginable.
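
As a toy illustration of that statistical idea, the sketch below learns which note tends to follow which from a handful of invented melodies, then samples a new tune by walking the learned transition table. Real songwriting systems are far more sophisticated, modeling rhythm, harmony, and long-range song structure as well.

```python
import random
from collections import defaultdict

# A tiny corpus of melodies, invented for illustration; real systems train
# on thousands of genre-specific songs.
corpus = [
    ["C", "E", "G", "E", "C", "D", "E", "F", "G"],
    ["G", "F", "E", "D", "C", "E", "G", "C"],
    ["E", "G", "A", "G", "E", "D", "C", "D", "E"],
]

transitions = defaultdict(list)         # note -> notes observed after it
for melody in corpus:
    for a, b in zip(melody, melody[1:]):
        transitions[a].append(b)

def generate(start="C", length=12):
    """Sample a new melody by walking the learned transition table."""
    notes = [start]
    for _ in range(length - 1):
        notes.append(random.choice(transitions[notes[-1]]))
    return notes

print(" ".join(generate()))
```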

As explained by Amadeus Code’s founder Taishi Fukuyama, “History teaches us that emerging technology in music leads to an explosion of art. For AI songwriting, I believe [it’s just] a matter of time before the right creators congregate around it to make the next cultural explosion.”

Envisioning a future wherein machines form part of the creation process, Will.i.am has even described a scenario in which he might tell his AI songwriting assistant, “Give me a shuffle pattern, and pull up a bass line, and give me a Bootsy Collins feel…”

AI: The Next Revolution in Creativity

Over the next decade, entertainment will undergo its greatest revolution yet. As AI converges with VR and crashes into democratized digital platforms, we will soon witness the rise of everything from edu-tainment, to interactive game-based storytelling, to immersive worlds, to AI characters and plot lines created on-demand, anywhere, for anyone, at almost zero cost.

We’ve already seen the dramatic dematerialization of entertainment. Streaming has taken the world by storm, as democratized platforms and new broadcasting tools birth convergence between entertainment and countless other industries.

Posing the next major disruption, AI is skyrocketing to new heights of creative and artistic capacity, multiplying content output and allowing any artist to refine their craft, regardless of funding, agencies, or record deals.

And as AI advancements pick up content generation and facilitate creative processes on the back end, virtual worlds and AR/VR hardware will transform our experience of content on the front end.

In our next blog of the series, we’ll dive into mixed reality experiences, VR for collaborative storytelling, and AR interfaces that bring location-based entertainment to your immediate environment.

Join Me

(1) A360 Executive Mastermind: If you’re an exponentially and abundance-minded entrepreneur who would like coaching directly from me, consider joining my Abundance 360 Mastermind, a highly selective community of 360 CEOs and entrepreneurs who I coach for 3 days every January in Beverly Hills, CA. Through A360, I provide my members with context and clarity about how converging exponential technologies will transform every industry. I’m committed to running A360 for the course of an ongoing 25-year journey as a “countdown to the Singularity.”

If you’d like to learn more and consider joining our 2020 membership, apply here.

(2) Abundance-Digital Online Community: I’ve also created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is Singularity University’s ‘onramp’ for exponential entrepreneurs — those who want to get involved and play at a higher level. Click here to learn more.

(Both A360 and Abundance-Digital are part of Singularity University; your participation opens you to a global community.)

This article originally appeared on diamandis.com. Read the original article here.

Image Credit: Fred Mantel / Shutterstock.com

Peter H. Diamandis, MD (http://diamandis.com/)
Diamandis is the founder and executive chairman of the XPRIZE Foundation, which leads the world in designing and operating large-scale incentive competitions. He is also the executive founder and director of Singularity University, a global learning and innovation community using exponential technologies to tackle the world’s biggest challenges and build a better future for all. As an entrepreneur, Diamandis has started over 20 companies in the areas of longevity, space, venture capital, and education. He is also co-founder of BOLD Capital Partners, a venture fund with $250M investing in exponential technologies. Diamandis is a New York Times Bestselling author of two books: Abundance and BOLD. He earned degrees in molecular genetics and aerospace engineering from MIT and holds an MD from Harvard Medical School. Peter’s favorite saying is “the best way to predict the future is to create it yourself.”