A Morphable Model For The Synthesis Of 3D Faces
Volker Blanz Thomas Vetter
Max-Planck-Institut für biologische Kybernetik,
Tübingen, Germany
Abstract
In this paper, a new technique for modeling textured 3D faces is
introduced. 3D faces can either be generated automatically from
one or more photographs, or modeled directly through an intuitive
user interface. Users are assisted in two key problems of computer
aided face modeling. First, new face images or new 3D face mod-
els can be registered automatically by computing dense one-to-one
correspondence to an internal face model. Second, the approach
regulates the naturalness of modeled faces, avoiding faces with an
“unlikely” appearance.
Starting from an example set of 3D face models, we derive a
morphable face model by transforming the shape and texture of the
examples into a vector space representation. New faces and expres-
sions can be modeled by forming linear combinations of the proto-
types. Shape and texture constraints derived from the statistics of
our example faces are used to guide manual modeling or automated
matching algorithms.
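The linear-combination idea above can be sketched in a few lines. This is an illustrative example with random placeholder data, not the paper's implementation: each shape vector stacks the (x, y, z) coordinates of all vertices, each texture vector the per-vertex color values, and a new face is a barycentric combination of the prototypes.

```python
import numpy as np

# Hypothetical data: m prototype faces, each with n vertices.
# A shape vector stacks the (x, y, z) coordinates of all n vertices
# (length 3*n); texture vectors stack per-vertex RGB values the same way.
rng = np.random.default_rng(0)
m, n = 5, 1000
shapes = rng.standard_normal((m, 3 * n))  # rows: prototype shape vectors S_i
textures = rng.random((m, 3 * n))         # rows: prototype texture vectors T_i

# Barycentric coefficients: non-negative and summing to 1, so the new
# face stays inside the convex hull of the example faces.
a = np.array([0.4, 0.3, 0.1, 0.1, 0.1])
assert np.isclose(a.sum(), 1.0) and (a >= 0).all()

new_shape = a @ shapes      # S_new = sum_i a_i * S_i
new_texture = a @ textures  # T_new = sum_i b_i * T_i (here b = a)
```

This only produces plausible results because all prototype vectors are in dense correspondence, i.e., entry k of every shape vector refers to the same anatomical point; that is exactly the registration problem the paper addresses.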
We show 3D face reconstructions from single images and their
applications for photo-realistic image manipulations. We also
demonstrate face manipulations according to complex parameters
such as gender, fullness of a face or its distinctiveness.
Keywords: facial modeling, registration, photogrammetry, mor-
phing, facial animation, computer vision
1 Introduction
Computer aided modeling of human faces still requires a great deal
of expertise and manual control to avoid unrealistic, non-face-like
results. Most limitations of automated techniques for face synthe-
sis, face animation or for general changes in the appearance of an
individual face can be described either as the problem of finding
corresponding feature locations in different faces or as the problem
of separating realistic faces from faces that could never appear in
the real world. The correspondence problem is crucial for all mor-
phing techniques, both for the application of motion-capture data
to pictures or 3D face models, and for most 3D face reconstruction
techniques from images. A limited number of labeled feature points
marked in one face, e.g., the tip of the nose, the eye corner and less
prominent points on the cheek, must be located precisely in another
face. The number of manually labeled feature points varies from
MPI f¨ur biol. Kybernetik, Spemannstr. 38, 72076 T¨ubingen, Germany.
E-mail:
f
volker.blanz, thomas.vetter
g
@tuebingen.mpg.de
Modeler
Morphable
Face Model
Face
Analyzer
3D Database
2D Input 3D Output
Figure 1: Derived from a dataset of prototypical 3D scans of faces,
the morphable face model contributes to two main steps in face
manipulation: (1) deriving a 3D face model from a novel image,
and (2) modifying shape and texture in a natural way.
application to application, but usually ranges from 50 to 300.
Only a correct alignment of all these points allows acceptable in-
termediate morphs, a convincing mapping of motion data from the
reference to a new model, or the adaptation of a 3D face model to
2D images for ‘video cloning’. Human knowledge and experience
are necessary to compensate for the variations between individual
faces and to guarantee a valid location assignment in the different
faces. At present, automated matching techniques can be utilized
only for very prominent feature points such as the corners of eyes
and mouth.
A second type of problem in face modeling is the separation of
natural faces from non-faces. For this, human knowledge is even
more critical. Many applications involve the design of completely
new natural looking faces that can occur in the real world but which
have no “real” counterpart. Others require the manipulation of an
existing face according to changes in age, body weight or simply to
emphasize the characteristics of the face. Such tasks usually require
time-consuming manual work combined with the skills of an artist.
In this paper, we present a parametric face modeling technique
that assists in both problems. First, arbitrary human faces can be
created while simultaneously controlling the likelihood of the generated
faces. Second, the system is able to compute correspondence be-
tween new faces. Exploiting the statistics of a large dataset of 3D
face scans (geometric and textural data, Cyberware™), we built
a morphable face model and recover domain knowledge about face
variations by applying pattern classification methods. The mor-
phable face model is a multidimensional 3D morphing function that
is based on the linear combination of a large number of 3D face
scans. By computing the average face and the main modes of
variation in our dataset, we impose a probability distribution on the
morphing function to avoid unlikely faces. We also derive paramet-
ric descriptions of face attributes such as gender, distinctiveness,
“hooked” noses or the weight of a person, by evaluating the distri-
bution of exemplar faces for each attribute within our face space.
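A minimal sketch of this statistical step, again with random placeholder data and hypothetical variable names rather than the paper's actual pipeline: PCA over the registered example vectors yields the average face and the principal modes of variation, the per-mode variances define a Gaussian prior that penalizes unlikely faces, and an attribute direction (e.g., gender) can be estimated from labeled exemplars.

```python
import numpy as np

rng = np.random.default_rng(1)
m, d = 200, 3 * 1000            # m example faces, d = 3 * n_vertices
X = rng.standard_normal((m, d)) # rows: registered shape (or texture) vectors

mean = X.mean(axis=0)           # average face
A = X - mean                    # deviations from the mean

# PCA via SVD: rows of Vt are the modes of variation (eigenvectors),
# s**2 / m the variances sigma_i^2 along each mode, sorted descending.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
variances = s**2 / m

def log_prior(alpha):
    """Log of the Gaussian prior over PCA coefficients alpha:
    coefficients large relative to sigma_i make the face 'unlikely'."""
    return -0.5 * np.sum(alpha**2 / variances[: len(alpha)])

# Synthesize a plausible face from the first k modes.
k = 10
alpha = rng.standard_normal(k) * np.sqrt(variances[:k])
face = mean + Vt[:k].T @ alpha

# Attribute direction from labeled exemplars (labels here are random
# stand-ins, e.g. +1/-1 for an attribute such as gender); shifting a
# face along this direction manipulates the attribute.
labels = rng.choice([-1.0, 1.0], size=m)
attr_dir = (labels @ A) / m
manipulated = face + 0.5 * attr_dir
```

The prior is what turns "avoid unlikely faces" into a computable quantity: any candidate face can be scored by its coefficients in the PCA basis.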
Having constructed a parametric face model that is able to gener-
ate almost any face, the correspondence problem turns into a mathe-
matical optimization problem. New faces, images or 3D face scans,
can be registered by minimizing the difference between the new
face and its reconstruction by the face model function. We devel-