Matthew Hutson in Science:
If you’ve used the smartphone application Snapchat, you may have turned a photo of yourself into a disco bear or melded your face with someone else’s. Now, a group of researchers has created the most advanced technique yet for building 3D facial models on the computer. The system could improve personalized avatars in video games, facial recognition for security, and—of course—Snapchat filters.

When computers process faces, they sometimes rely on a so-called 3D morphable model (3DMM). The model represents an average face, but also contains information on common patterns of deviation from that average. For example, if you have a long nose, you’re also likely to have a long chin. Given such correlations, a computer can then characterize your unique face not by storing every point in a 3D scan, but by listing just a couple hundred numbers describing your deviation from an average face, including parameters that roughly correspond to age, gender, and length of face.
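To make that representation concrete, here is a minimal sketch in Python of how a morphable model rebuilds a full 3D face from a short list of coefficients. It assumes a PCA-style model; the array names, sizes, and random placeholder data are illustrative, not taken from the paper:

```python
import numpy as np

# A minimal sketch of the 3DMM idea, assuming a PCA-style model.
# mean_shape holds the average face; components holds the common
# patterns of deviation (e.g., "long nose tends to go with long chin").
# All data below is a random placeholder, not a real face model.
rng = np.random.default_rng(0)
V, K = 5000, 200                          # vertices per face, model parameters
mean_shape = rng.normal(size=(V, 3))      # average face: V vertices in 3D
components = rng.normal(size=(K, V, 3))   # K common deviation patterns

def reconstruct(coeffs: np.ndarray) -> np.ndarray:
    """Rebuild a full 3D face from just K numbers: the face's
    deviation from the average along each common pattern."""
    return mean_shape + np.tensordot(coeffs, components, axes=1)

coeffs = rng.normal(size=K)   # a unique face, stored as ~200 numbers
face = reconstruct(coeffs)    # the full (V, 3) mesh, recovered from them
```

The payoff is compression with meaning: instead of tens of thousands of raw 3D points, a face is a couple hundred interpretable numbers.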
There’s a catch, however. To account for all the ways faces can vary, a 3DMM needs to integrate information on many faces. Until now, that has required scanning lots of people and then painstakingly labeling all of their features. Consequently, the current best models are based on only a couple hundred people—mostly white adults—and have limited ability to model people of different ages and races.

Now, James Booth, a computer scientist at Imperial College London (ICL), and colleagues have developed a new method that automates the construction of 3DMMs and enables them to incorporate a wider spectrum of humanity. The method has three main steps. First, an algorithm automatically landmarks facial scans—labeling the tip of the nose and other points. Second, another algorithm lines up all the scans according to their landmarks and combines them into a model. Third, an algorithm detects and removes bad scans.

“The really big contribution in this work is they show how to fully automate this process,” says William Smith, who studies computer vision at the University of York in the United Kingdom and was not involved in the study. Labeling dozens of facial features on many faces is “pretty tedious,” says Alan Brunton, a computer scientist at the Fraunhofer Institute for Computer Graphics Research in Darmstadt, Germany, who was also uninvolved. “You think it’s relatively easy to click a point, but it’s not always obvious where the corner of the mouth really is, so even when you do this manually you have some error.”

But Booth and colleagues didn’t stop there. They applied their method to a set of nearly 10,000 demographically diverse facial scans. The scans were done at a science museum in London by the plastic surgeons Allan Ponniah and David Dunaway, who hoped to improve reconstructive surgery. They approached Stefanos Zafeiriou, a computer scientist at ICL, for help analyzing the data. Applying the algorithm to those scans created what they call the “large scale facial model,” or LSFM.

In tests against existing models, the LSFM represented faces much more accurately, the authors report in a forthcoming issue of the International Journal of Computer Vision. In one comparison, they created a model of a child’s face from a photograph. Using the LSFM, the model looked like the child. Using one of the most popular existing morphable models—which is based entirely on adults—it looked like an unrelated grown-up. Booth and his colleagues even had enough scans to create more-specific morphable models for different races and ages. And their model can automatically classify faces into age groups based on shape.
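The three automated steps described above can be sketched in code. What follows is a hedged illustration only: it assumes rigid Procrustes alignment and PCA for the "combine" step, and reconstruction error for pruning; the function names and the stand-in landmark detector are hypothetical, not the authors' implementation.

```python
import numpy as np

def detect_landmarks(scan: np.ndarray) -> np.ndarray:
    """Step 1 (stand-in): automatically label points such as the nose tip.
    A real pipeline would run a trained landmark detector here."""
    return scan[:68]   # pretend the first 68 vertices are the landmarks

def align_to_reference(scan, landmarks, ref_landmarks):
    """Step 2a: rigidly line up a scan with a reference via its landmarks
    (orthogonal Procrustes: the rotation best matching the two point sets)."""
    mu_s, mu_r = landmarks.mean(axis=0), ref_landmarks.mean(axis=0)
    u, _, vt = np.linalg.svd((landmarks - mu_s).T @ (ref_landmarks - mu_r))
    return (scan - mu_s) @ (u @ vt) + mu_r

def combine_into_model(aligned_scans, num_patterns=200):
    """Step 2b: combine aligned scans into an average face plus the
    principal patterns of deviation (PCA via SVD)."""
    data = np.stack([s.ravel() for s in aligned_scans])   # (N, V*3)
    mean = data.mean(axis=0)
    _, _, components = np.linalg.svd(data - mean, full_matrices=False)
    return mean, components[:num_patterns]   # one deviation pattern per row

def prune_bad_scans(scans, mean, components, threshold):
    """Step 3: detect and remove bad scans, here flagged as scans the
    model cannot reconstruct well."""
    kept = []
    for s in scans:
        x = s.ravel() - mean
        residual = np.linalg.norm(x - components.T @ (components @ x))
        if residual < threshold:
            kept.append(s)
    return kept
```

In a real pipeline each step hides most of the difficulty: the landmark detector must be trained, registration is far denser than the rigid landmark alignment shown here, and pruning can be iterated as the model improves.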
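The age-group classification mentioned at the end also follows naturally from the representation: once each face is reduced to a few hundred model coefficients, any standard classifier can operate on those numbers. A toy illustration with synthetic stand-in data, not the paper's method or data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy illustration: classify faces into age groups from their model
# coefficients. Coefficients and labels below are random stand-ins.
rng = np.random.default_rng(1)
N, K = 500, 200
coeffs = rng.normal(size=(N, K))        # stand-in 3DMM coefficients
age_groups = rng.integers(0, 4, N)      # stand-in labels: 4 age bins

clf = LogisticRegression(max_iter=1000).fit(coeffs, age_groups)
print(clf.predict(coeffs[:5]))          # predicted age groups
```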
More here.