The quest to develop a system for animating and rendering completely realistic human faces takes another step forward with a new performance capture technique from the Digital Human League.
Under the guidance of Paul Debevec, the computer graphics researcher whose groundbreaking papers on image-based modelling and lighting helped create the visual effects in The Matrix, the University of Southern California's ICT Graphics Laboratory has arguably become the world's most respected pioneer in the field of photorealistic animation and rendering.
Its work on high dynamic range imaging has changed the way 3D artists capture and utilise real-world lighting information, while its Light Stage technology is now a staple tool for many visual effects studios when building digital doubles – even recently making an appearance at the White House to create a 3D bust of Barack Obama.
Now, following the news that Debevec and a number of other VFX luminaries are working together under the Digital Human League banner to push forward realism for digital humans, comes a method for driving high-resolution facial scans directly using video performance capture.
The process involves first creating a high-resolution model of an actor's face from one or more high-quality geometry and reflectance scans. That model is then animated using one or more video streams of a performance recorded in arbitrary environments. What's remarkable is just how accurately the system maps and recreates the subtleties of markerless facial movement, without geometry drift and without the need for specialised rendering tools or tricks.

The technique is due to be shown at this year's SIGGRAPH, but in the meantime you can read more about it on the ICT webpage here, and see it in action and explained in the video below.
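To give a rough sense of how video footage can drive a pre-scanned facial model, here is a minimal conceptual sketch. It is not ICT's algorithm: it assumes a hypothetical blendshape-style rig, fabricated placeholder data standing in for real scans and a real landmark tracker, and a toy orthographic camera, and simply fits per-frame blendshape weights to tracked 2D landmarks by regularised least squares.

```python
import numpy as np

# --- Placeholder data (stand-ins for real scan and tracking inputs) ---
# A real pipeline would load a high-resolution scanned neutral mesh and a set
# of expression blendshapes; tiny random ones are fabricated here for illustration.
rng = np.random.default_rng(0)
NUM_VERTS = 5000          # a real facial scan would have far more vertices
NUM_SHAPES = 30           # number of expression blendshapes in the hypothetical rig
NUM_LANDMARKS = 68        # typical 2D landmark count from a video face tracker

neutral_mesh = rng.normal(size=(NUM_VERTS, 3))                          # neutral geometry
blendshapes = rng.normal(scale=0.01, size=(NUM_SHAPES, NUM_VERTS, 3))   # vertex offsets

# Indices of mesh vertices that correspond to the tracked 2D landmarks.
landmark_verts = rng.choice(NUM_VERTS, size=NUM_LANDMARKS, replace=False)


def project(points_3d):
    """Toy orthographic projection standing in for a calibrated camera model."""
    return points_3d[:, :2]


def solve_frame_weights(landmarks_2d, reg=1e-3):
    """Fit blendshape weights to one frame of 2D landmarks by ridge-regularised
    least squares, restricted to the landmark vertices (a linear problem)."""
    base_2d = project(neutral_mesh[landmark_verts])                      # (L, 2)
    # Each blendshape's effect on the projected landmarks, flattened to a column.
    A = np.stack(
        [project(blendshapes[k][landmark_verts]).ravel() for k in range(NUM_SHAPES)],
        axis=1,
    )                                                                    # (2L, K)
    b = (landmarks_2d - base_2d).ravel()                                 # (2L,)
    return np.linalg.solve(A.T @ A + reg * np.eye(NUM_SHAPES), A.T @ b)


def animate(video_landmarks):
    """Drive the high-resolution mesh with per-frame tracked 2D landmarks."""
    frames = []
    for landmarks_2d in video_landmarks:
        w = solve_frame_weights(landmarks_2d)
        deformed = neutral_mesh + np.tensordot(w, blendshapes, axes=1)
        frames.append(deformed)
    return frames


# Fake "tracked video": jitter the neutral landmark projections over 10 frames.
fake_video = [
    project(neutral_mesh[landmark_verts])
    + rng.normal(scale=0.005, size=(NUM_LANDMARKS, 2))
    for _ in range(10)
]
animated = animate(fake_video)
print(len(animated), animated[0].shape)   # 10 frames of deformed high-res geometry
```

A production system would replace these placeholders with calibrated cameras, a dense tracker and the scanned geometry itself; the sketch only illustrates the general idea of per-frame fitting of a scanned model to video.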