Resurrecting the Dead—Bringing Back the Dearly Departed Digitally

Progress in photorealistic 3D avatar technology, voice synthesis, augmented reality, and artificial intelligence at large should make it possible within 30 years to bring the dearly departed back to life.

Let’s first agree on the terminology. Of course, there’s no such thing as resurrecting someone for real, but if we limit ourselves to the way someone looks, acts, walks, talks, reacts, laughs and other visual and sound-related traits, then we might not be that far from the day we can digitally revive the dead (or duplicate any living person for that matter).

Generating a Photorealistic 3D Avatar

Digital artists can already create an almost photorealistic digital 3D avatar of any deceased person, provided there are enough pictures and enough video material available. This is how Tupac's sort-of hologram avatar was created for the 2012 Coachella music festival.

In 2013, a team from the University of Southern California’s Institute for Creative Technologies disclosed a process whereby a human face can be scanned by a set of digital cameras and photorealistically reconstructed in real time onscreen. The result is mind-blowing and even scary.

It definitely speaks for itself; you can watch it here.

These first two examples give impressive results but are still quite time-consuming and expensive. The USC team went further down the road of affordability and found a way last year to reconstruct someone's body onscreen in real time with a $100 Microsoft Kinect camera. The avatar is assigned the usual human degrees of freedom and given the ability to move, run and jump, as can be seen here.

Given the state of the art and current rate of improvement, it’s quite likely that within 30 years a few pictures of someone will suffice to recreate a 3D avatar so realistic that it will be hard (if not impossible) to tell who’s who.

And given enough video material of someone, deep learning algorithms will one day be able to dissect each body movement and understand the way that person laughs, sneezes, smiles, stands, walks, and so on, in order to make the avatar behave the same way in the relevant situations. This relies on computer vision technology, which is improving very fast: AI can now recognize cats in pictures, among many other concepts (don't miss the TED talk "How we're teaching computers to understand pictures").
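The core idea behind learning someone's characteristic movements can be sketched very simply: capture labeled examples of how the person moves, then match new motion against them. A real system would use deep networks over video, but a minimal nearest-neighbor sketch over invented pose data (all joint positions and gesture labels below are illustrative, not from any actual product) conveys the principle:

```python
import math

# Hypothetical training data: (x, y) positions of a few body joints,
# captured from videos of the person and labeled by gesture.
EXAMPLES = {
    "wave":  [(0.1, 0.9), (0.5, 0.5), (0.5, 0.1)],
    "shrug": [(0.3, 0.6), (0.5, 0.5), (0.7, 0.6)],
}

def pose_distance(a, b):
    """Sum of Euclidean distances between corresponding joints."""
    return sum(math.dist(p, q) for p, q in zip(a, b))

def classify_gesture(pose):
    """Return the label of the closest example pose."""
    return min(EXAMPLES, key=lambda label: pose_distance(pose, EXAMPLES[label]))

# A newly observed pose, close to the "wave" example.
observed = [(0.12, 0.88), (0.5, 0.52), (0.48, 0.12)]
print(classify_gesture(observed))  # → wave
```

Scale the same matching idea up to thousands of labeled clips and far richer motion features, and you get an avatar that reuses the gestures it has seen the person make.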

When data is missing, relatives will be able to fill the void by simply trying to impersonate their departed loved one by mimicking specific gestures: their moves will be recorded and applied to the avatar in appropriate circumstances. This has been done for quite some time already in movies. Watch, for instance, how Gollum’s behavior is generated from Andy Serkis’s acting in the Lord of the Rings and Hobbit films.

But this in itself is not enough to talk of resurrection, even given our aforementioned definition. A photoreal avatar that can move in a credible way has yet to be endowed with the proper tone of voice and personality to be worthy of our full consideration.

Voice Synthesis

After 30 years of research and development, the European company Acapela has developed software to recreate your voice with its particular tone. All you need to do is recite a list of 1,500 sentences. This can be done from your laptop over the internet using a basic headset. The tech is aimed at patients who are about to lose their voice and would rather keep using their own than be stuck with a robotic one.

I’ve tried it, and it’s a jaw-dropping experience to hear yourself speaking text you’ve never read! (Learn more by clicking here.) The result isn’t yet perfect, and the intonation isn’t always right, but the potential is clear: We’re headed towards a world where your voice could be used to say anything in any language—and could, of course, eventually be used in a digitally resurrected avatar. (On the other hand, it also means we’ll have to make double sure we know who we’re on the phone with!)

Back to our quest: Getting a 3D avatar to talk smoothly using a synthesized voice is quite a feat in terms of lip and tongue synchronization. Microsoft Research has been doing well in this regard since at least 2011. Using a Kinect and a normal camera, researchers film a subject reciting text for 20 minutes. The speech is then broken into phonemes tied to the corresponding lip and tongue movements, and a photoreal face is reconstructed on screen. Using machine learning, the face can be made to say any text in real time, with lips and tongue moving realistically to generate the flow of words (see this video, starting at 1:10).
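The phoneme-to-mouth-shape step can be illustrated with a toy sketch: map each phoneme to a viseme (a visual mouth shape) and emit a timed sequence the renderer can play back. The phoneme symbols, viseme names, and fixed 80 ms duration below are assumptions for illustration, not Microsoft's actual pipeline:

```python
# Illustrative phoneme → viseme table (ARPAbet-style phoneme symbols,
# invented viseme names).
PHONEME_TO_VISEME = {
    "HH": "open-slight",
    "EH": "open-mid",
    "L":  "tongue-up",
    "OW": "rounded",
}

def lipsync_timeline(phonemes, frame_ms=80):
    """Return (start_ms, viseme) pairs for a phoneme sequence."""
    timeline = []
    for i, ph in enumerate(phonemes):
        viseme = PHONEME_TO_VISEME.get(ph, "neutral")  # fall back to a resting mouth
        timeline.append((i * frame_ms, viseme))
    return timeline

# "Hello" → phonemes HH EH L OW
print(lipsync_timeline(["HH", "EH", "L", "OW"]))
```

A production system would also blend between consecutive visemes and vary durations with the audio, but the pipeline shape — text to phonemes to timed mouth shapes — is the same.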

Again, it’s likely that 30 years from now, it will be possible to recreate a voice perfectly from a few minutes of recording—this will give an avatar of you or a deceased loved one the ability to speak.

Using AI to Bring Back Personality

Tremendous progress in AI will allow us to create an avatar that talks and reacts just as we would have.

The AI will need to be fed as much information as possible about the person to be resurrected: What she's read, seen, done, learned, listened to, watched, and searched; where she's been; who she's met and talked to; and so on. The more information, the more accurate the emulation. Services such as Facebook or Google, which know a whole lot about us, will be in a prime position to supply this data.
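One simple way to picture this data-driven emulation: given a corpus of things the person actually said, answer a new question by retrieving the most topically similar utterance. Real systems would use far more sophisticated language models trained on the person's data; the corpus and word-overlap scoring below are invented purely for illustration:

```python
# Hypothetical corpus of the person's recorded utterances.
CORPUS = [
    "I always said the garden looks best in late spring.",
    "That detective novel kept me up all night.",
    "Nothing beats a long walk along the river.",
]

def most_relevant(question, corpus):
    """Return the corpus line sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(corpus, key=lambda line: len(q_words & set(line.lower().split())))

print(most_relevant("what do you think of the garden", CORPUS))
# → "I always said the garden looks best in late spring."
```

The richer the corpus, the more situations in which the emulation can respond with something the person genuinely could have said.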

That information will be paired to AI’s future ability to “understand” human language well enough to hold a discussion.

In 2011, IBM’s Watson program defeated the best human players at Jeopardy (see video), a game that requires understanding double meanings and the subtleties of human spoken language. Watson is now used for medical purposes and is better at diagnosing cancer than doctors with years of experience.

Ray Kurzweil, a director of engineering at Google, is known for the accuracy of his futuristic predictions, many of which have come true (Wikipedia). In 2014, he was quoted as saying: “In 2029, computers will be more intelligent than we are and will be able to understand what we say, learn from experience, make jokes, tell stories and even flirt.”

Again, given the rate of progress, it’s quite likely that within 30 years AI will be able to interact with us in a very natural and human way. With enough data from a deceased person, AI will eventually be able to emulate her personality almost perfectly and say things she could have credibly uttered. AI will be able to understand context, sense who’s around, and talk about specific topics accordingly (after all, we don’t discuss the same things or behave the same way with all of our relatives).

If anything, the challenge will be to make sure AI doesn’t lose credibility by being too smart or too knowledgeable. Just because my late brother had read a book, for example, doesn’t mean he remembered each and every sentence of it.

Entrepreneur Martine Rothblatt has created a talking head that looks like her wife and loaded it with her wife’s “memories, thoughts and feelings.” Though still a little creepy looking, perhaps, the head is able to hold a conversation. Watch the video:

It’s still a prototype prone to gibbering, but Rothblatt says, “Such functional mind clones are 10-20 years away. Am I breaking a law of physics here? Am I talking about defying gravity here? No. Am I talking about going faster than light? No. All I am doing here is talking about writing some good code.”

AI will get it wrong sometimes and say unexpected things, but we’ll be able to give feedback, and over time, it could reach near-perfection and become basically indistinguishable from the real person.

Augmented Reality

Now, what about projecting that digital photoreal avatar of a late loved one into the real world beyond screens? This is the promise of augmented reality, a field which is just about to boom.

Microsoft may lead the way with its HoloLens device. With HoloLens, you’ll be able to see, for instance, a digital toucan land on your armchair (see animation). The April 2015 demo shows perfectly how avatars and other virtual objects could be added seamlessly to real life (see the two-minute video).

Bringing Loved Ones Back to Life Digitally…

So, consider this: We may live long enough to experience a 3D photoreal digital avatar of a loved one we lost, able to move in front of us and converse with us in a perfectly realistic manner. The more pics, videos, audio recordings and information about her, the better the rendering. This won’t bring anyone’s soul or body back for real, but heck, I’d argue it’s still a better way to remember our departed ones than what we have now.

And beyond talk of resurrection, these technologies promise to utterly disrupt the way we communicate and are entertained. So, let’s fasten our seatbelts, and enjoy the ride!

Thomas Jestin is a technophile and entrepreneur based out of Singapore. He’s the cofounder of KRDS, a French social media marketing agency and Facebook marketing partner.

Find him on Twitter: @thomasjestin

Image Credit: sinefxcom/YouTube
