Disney’s Exquisite Digital Eyes Bring Avatars to Life

When you meet someone, where does your gaze first fall? Usually, you’ll find you look for their eyes. And maybe this is partly why some digital characters can seem alien—their eyes are approximations of the real thing, and our brain revolts.

To make photorealistic digital characters, even minute details matter.

But until recently, we didn't have the technology to capture every detail. Or rather, the cost and time required were too high to warrant the investment. Thanks to advanced scanning and modeling software and more powerful computers, however, that's changing, and digital characters are steadily climbing out of the uncanny valley.

As you might expect, the entertainment industry has the greatest incentive to advance digital characters. A recent Disney Research project, for example, models the eye with extremely high fidelity.

“Creating a photo-realistic digital human is one of the grand challenges of computer graphics,” said Pascal Bérard, a Ph.D. student in computer graphics at Disney Research Zurich and ETH Zurich. However, while researchers have focused intently on reconstructing skin and features, the eye has so far been neglected.

In a recent paper, Bérard and his coauthors note that the shapes of digital characters' eyes are often approximated by fusing two spheres: one for the sclera (the white outer ball of the eye) and another for the cornea (the transparent dome at its front). The colored iris is modeled as a flat disc or simple cone, and eye movements are similarly simplified.
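
To make that approximation concrete, here is a minimal sketch of the traditional two-spheres-plus-flat-disc eye model, written in Python with NumPy. All radii, offsets, and depths are rough, illustrative assumptions, not values taken from the Disney Research paper.

```python
import numpy as np

# A minimal sketch of the traditional two-sphere eye approximation.
# All radii, offsets, and depths below are rough, illustrative values,
# not numbers taken from the Disney Research paper.

SCLERA_RADIUS = 12.0   # mm, approximate radius of the eyeball
CORNEA_RADIUS = 7.8    # mm, approximate radius of the corneal bulge
CORNEA_OFFSET = 5.5    # mm, the cornea sphere's center sits toward the front
IRIS_DEPTH = 9.5       # mm, depth of the flat iris disc along the view axis

def sclera_point(theta, phi):
    """A point on the scleral sphere, centered at the origin."""
    return SCLERA_RADIUS * np.array([
        np.sin(theta) * np.cos(phi),
        np.sin(theta) * np.sin(phi),
        np.cos(theta),
    ])

def cornea_point(theta, phi):
    """A point on the corneal sphere, whose center is pushed forward."""
    center = np.array([0.0, 0.0, CORNEA_OFFSET])
    return center + CORNEA_RADIUS * np.array([
        np.sin(theta) * np.cos(phi),
        np.sin(theta) * np.sin(phi),
        np.cos(theta),
    ])

def iris_point(r, phi):
    """The iris approximated as a flat disc at a fixed depth (r in mm)."""
    return np.array([r * np.cos(phi), r * np.sin(phi), IRIS_DEPTH])

# Sample one point on each idealized surface near the front of the eye:
print(sclera_point(0.3, 0.0), cornea_point(0.3, 0.0), iris_point(3.0, 0.0))
```

The rest of the research is essentially about how far real eyes stray from this tidy geometry.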

Digitally reconstructed eye.

In practice, however, the eyes are far from Platonic ideals. The sclera is no perfect sphere. Its surface is pocked with small imperfections and indentations. Each iris is exquisitely textured and as distinctive as a fingerprint. The cornea and iris assume myriad shapes as the eye shifts focus and reacts to changing light. The researchers write:

“The human eye is a heterogeneous compound of opaque and transparent surfaces with a continuous transition between the two, and even surfaces that are visually distorted due to refraction.”

We might not consciously notice all these details when talking to a friend, but our brain tallies up the total effect and knows when something’s off.

So, how do the Disney researchers work their magic? Divide and conquer.

They begin with the sclera, move on to the cornea, and finish with the iris. The subject (in this case an actor) lies down, head held as still as possible by a headrest, surrounded by a halo of cameras, modified flashes, and colored LEDs that reflect off the cornea. The capture process takes about 20 minutes.

Once recorded, the raw data is stitched together by specialized algorithms running on a standard Windows PC. Reconstructing the whole eye takes about four hours.
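
Part of what makes that reconstruction tricky is the refraction the researchers mention above: the iris is always photographed through the cornea, so the algorithms have to account for light bending at that transparent surface. As a small, self-contained illustration of the underlying physics (not the paper's actual algorithm), here is the vector form of Snell's law in Python, using a commonly cited corneal refractive index of roughly 1.376:

```python
import numpy as np

def refract(direction, normal, n1=1.0, n2=1.376):
    """Bend a unit ray direction as it crosses a surface, per Snell's law.

    `direction` points along the ray's travel; `normal` is the unit surface
    normal on the side the ray comes from. The defaults are air and a
    typical textbook value for the cornea's refractive index.
    """
    d = np.asarray(direction, dtype=float)
    n = np.asarray(normal, dtype=float)
    eta = n1 / n2
    cos_i = -np.dot(n, d)
    sin2_t = eta**2 * (1.0 - cos_i**2)
    if sin2_t > 1.0:
        return None  # total internal reflection; no transmitted ray
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * d + (eta * cos_i - cos_t) * n

# A ray hitting the cornea at 30 degrees from the surface normal bends
# toward the normal as it enters the denser medium:
incoming = np.array([np.sin(np.radians(30)), 0.0, -np.cos(np.radians(30))])
surface_normal = np.array([0.0, 0.0, 1.0])
print(refract(incoming, surface_normal))
```

Tracing rays through a model of the cornea in roughly this way is what lets a reconstruction work out where iris features actually sit, rather than where they merely appear to be.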

An actor's eye next to its digital reconstruction.

The team reconstructed nine different eyes from six actors and was struck by how distinct the shape, coloring, and deformation of each eye was. One application for their eyes might be in a photorealistic digital double of an actor's face. But the team also shows an image of an artistically enhanced face, one with midnight blue skin.

“Such a result would traditionally take significant artistic skill and man-hours to generate, in particular if it was meant to closely resemble a real actor,” Bérard said. “Our result was created with very little effort.”

The method currently reconstructs a motionless eye with a dynamic iris, and the researchers think next steps include modeling the complex motions of the entire eye more closely. And more realistic eyes will, of course, be combined with more realistic faces and bodies.

Beyond obvious applications in film characters, photorealistic avatars might find their way into video games and virtual reality. Much further down the line, those interacting in virtual worlds might go to a studio to build a photorealistic avatar.

Indeed, Philip Rosedale, creator of the virtual world Second Life, is working on a sequel called High Fidelity. High Fidelity aims, in part, to use sensors to record physical motions, like eye movement and facial expressions, and recreate them in silico.

High Fidelity’s sensors paired with realistic avatars might offer a more immersive, intuitive virtual experience.

Further, future AI-powered digital assistants might inhabit digital bodies and interact with us online or out and about. Well-rendered avatars seem (to me at least) more feasible than physically embodying AI in sci-fi androids. Meanwhile, AI-driven robots are free to take myriad forms, driven more by function than appearance; they need look nothing like us.

Also, it's important to note that we humans are perfectly comfortable with robots and avatars that don't try to look too realistic. The uncanny valley opens up when we try to make something look human but just miss, and the result is creepy. (Check out this uncanny valley video tutorial to learn more about almost-realistic recreations gone wrong.)

The future will likely see all kinds of digital characters and avatars and robots taking up residence on either side of the uncanny valley—but (hopefully) very few will spend much time in its depths.

Image Credit: DisneyResearchHub/YouTube

Jason Dorrier
Jason is editorial director of Singularity Hub. He researched and wrote about finance and economics before moving on to science and technology. He's curious about pretty much everything, but especially loves learning about and sharing big ideas and advances in artificial intelligence, computing, robotics, biotech, neuroscience, and space.