To address these problems, we propose a new Structure-Driven-Adaptation (SDA) method for the efficient reconstruction of animated 3D faces of real human individuals. The technique adapts a generic prototype facial model to the acquired surface data in an "outside-in" manner: deformations applied to the external skin layer are propagated, through subsequent transformations, to the muscles, and ultimately morph the underlying skull. The generic control model has a known topology and incorporates an anatomy-based layered hierarchy of physically-based skin, muscles, and skull. What is unique about our approach is that the layered representation is used not only to produce appropriate skin deformations during animation, but also to generate the model itself.

Geometry and texture information of the faces of real individuals is acquired with a laser range scanner. Starting from an interactively specified set of anthropometric landmarks on the generic control model and on the scanned surface, a global alignment automatically adapts the position, size, and orientation of the generic control model to the scan data, based on a series of measurements between a subset of landmarks. A physically-based face shape adaptation then fits the positions of all vertices of the generic control model to the scanned surface. The generic mesh is modeled as a dynamic deformable surface: its deformation results from internal forces, which impose surface continuity constraints, and external forces, which attract the surface toward the scan data.

We incorporate the effect of structural differences in the muscles and skull, both to generate and to animate the model. SDA transfers the muscles to the new geometry of the skin surface. A set of skull feature points is then generated automatically from the new external structural layers, and these feature points deform the attached skull mesh using a volume morphing approach. With the adapted muscle and skull structures, the reconstructed model can be animated immediately to produce various expressions from the given muscle and jaw motion parameters.
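The following is a minimal sketch of the two geometric steps above, not the paper's exact formulation. It stands in a least-squares similarity fit from landmark correspondences (Umeyama's method) for the measurement-based global alignment, and a simple explicit relaxation with an umbrella (Laplacian) internal term and a closest-point external term for the physically-based adaptation; the function names, force weights, and the `neighbors` adjacency structure are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree


def similarity_from_landmarks(src, dst):
    """Least-squares similarity transform (scale s, rotation R, translation t)
    mapping source landmarks onto destination landmarks (Umeyama)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))  # guard against reflection
    D = np.diag([1.0, 1.0, d])
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / src_c.var(axis=0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t


def fit_control_mesh(verts, neighbors, scan_pts,
                     n_iters=200, k_int=0.3, k_ext=0.6, step=0.5):
    """Relax the generic control mesh toward the scan surface:
    an internal (umbrella / Laplacian) force enforces surface continuity,
    an external force attracts each vertex toward its closest scan point."""
    tree = cKDTree(scan_pts)
    V = verts.copy()
    for _ in range(n_iters):
        # internal force: pull each vertex toward the centroid of its neighbors
        lap = np.array([V[nb].mean(axis=0) - V[i]
                        for i, nb in enumerate(neighbors)])
        # external force: pull each vertex toward the nearest scan point
        _, idx = tree.query(V)
        ext = scan_pts[idx] - V
        V = V + step * (k_int * lap + k_ext * ext)
    return V


# Hypothetical usage: landmark arrays are (n, 3), verts is (m, 3),
# neighbors is a list of per-vertex index arrays, scan_pts is (p, 3).
# s, R, t = similarity_from_landmarks(model_landmarks, scan_landmarks)
# aligned = (s * (R @ verts.T)).T + t
# fitted = fit_control_mesh(aligned, neighbors, scan_pts)
```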
Animating this adapted low-resolution control mesh is computationally efficient, while the reconstruction of high-resolution surface detail on the animated control model is handled separately. A scalar displacement map encodes the high-resolution geometric detail, providing a compact representation of the surface shape and allowing control over the level of detail. We develop an offset-envelope mapping method that automatically generates the displacement map by mapping the scan data onto the low-resolution control mesh. A hierarchical representation of the model is then constructed that approximates the scanned data set with increasing accuracy, refining the surface with a triangular mesh subdivision scheme and resampling the displacement map at each level. This mechanism enables efficient and seamless animation of the high-resolution face geometry through the animation controls of the adapted control model.
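As a rough illustration of the refinement-plus-displacement idea (not the offset-envelope mapping itself), the sketch below performs one linear 1-to-4 triangle subdivision, approximates the scan as a point cloud, stores per-sample scalar offsets along the surface normals, and reapplies them to reconstruct detail; the function names are hypothetical and vertex-normal computation is assumed to exist elsewhere.

```python
import numpy as np
from scipy.spatial import cKDTree


def midpoint_subdivide(verts, faces):
    """One 1-to-4 linear triangle subdivision step; a smoothing scheme such
    as Loop subdivision would additionally reposition the vertices."""
    new_verts = [v for v in verts]
    edge_mid = {}

    def midpoint(a, b):
        key = (min(a, b), max(a, b))
        if key not in edge_mid:
            edge_mid[key] = len(new_verts)
            new_verts.append(0.5 * (verts[a] + verts[b]))
        return edge_mid[key]

    new_faces = []
    for a, b, c in faces:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        new_faces += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return np.array(new_verts), np.array(new_faces)


def sample_displacements(surf_pts, surf_normals, scan_pts):
    """Scalar displacement per sample: signed offset along the unit surface
    normal toward the scan, approximated here by projecting the vector to
    the closest scan point onto the normal."""
    tree = cKDTree(scan_pts)
    _, idx = tree.query(surf_pts)
    return np.einsum('ij,ij->i', scan_pts[idx] - surf_pts, surf_normals)


def apply_displacements(surf_pts, surf_normals, disp):
    """Offset each sample along its normal by the stored scalar value."""
    return surf_pts + disp[:, None] * surf_normals
```

At each refinement level one would subdivide the control mesh, recompute vertex normals, resample the displacements, and offset the vertices, so that the animated low-resolution mesh drives the high-resolution surface.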