Photorealistic faces in video games have been with us for a while, but they’re still a long way from what you’d call realistic: while we’ve got the polygons and texturing almost perfected, facial animations remain stuck in the depths of the uncanny valley.
This isn’t to heap shit on animators and modellers—mimicking the nuances of human facial expressions is clearly very hard work, and teams have come a long way from Goldeneye to Half-Life 2 to Yakuza—but it is what it is. Progress is still being made, though, and while we’ve all got used to accepting the fact that video game characters can look kinda real until they open their mouths or try and look scared, we are going to get to the point one day where the uncanny valley is bridged, and they simply do look real.
This is one way we might get there. You’re looking at the work of VFX studio Ziva Dynamics, whose Unreal Engine-based examples here show off facial animation tech capable of some truly outstanding stuff. As an example, look at the model on the left, generated from the real-time capture on the right:
This real-time stuff looks fine, excellent even, but also not that much past what we’re used to at the moment. Now look what happens when you run the same model through terabytes of reference data and some machine learning:
Holy shit. While Ziva work across the entertainment industry, from TV to movies, this tech has the gaming industry in particular in mind, since it would let developers introduce a whole range of facial emotions that are difficult, if not impossible, to get across at the moment.
For a slightly more technical example, here’s an industry-focused trailer from the company, which at the end shows one of the more promising side-effects of the approach being used: that it can work across ethnicities and even (fictional) species: