Simultaneous tracking of rigid head motion and non-rigid facial animation


INVESTIGATORS

Dr. Yisong Chen et al.

 

KEYWORDS

Motion analysis, Face tracking, Feature extraction, Pose estimation, Statistical learning, Maximum likelihood estimate.

 

BRIEF DESCRIPTION

A fast and reliable model-based head motion tracking scheme is presented. In this approach, rigid head motion and non-rigid facial animation are tracked simultaneously and robustly by statistically analyzing the local regions of several representative facial features. The features are defined on a mesh model, which maintains a global constraint over the local features and avoids time-consuming appearance computation. A statistical model is learned from a moderate training set obtained by synthesizing different poses from a given standard initial image. During tracking, feature-based local distributions are extracted directly from the target features, so the error-prone feature detection and costly model rendering steps are avoided. The observed distributions are compared with the pre-computed statistical model, and tracking is achieved by minimizing an error function derived from maximum likelihood estimation. Experimental results show that this tracking strategy is robust to freeform head motion, facial animation, and illumination changes. Tracking runs in near real time and recovers easily from failures.
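The pipeline can be sketched in code. The Python sketch below is illustrative only, not the authors' implementation: it assumes a hypothetical project_features(frame, params) helper that uses the mesh model to extract each facial feature's local image region under given pose and animation parameters, and it stands in for the paper's local-feature statistics with simple grayscale histograms modeled as Gaussians.

import numpy as np
from scipy.optimize import minimize

def local_histogram(patch, bins=16):
    """Normalized grayscale histogram of one feature region."""
    hist, _ = np.histogram(patch, bins=bins, range=(0, 255))
    return hist / max(hist.sum(), 1)

def train_statistical_model(training_patches):
    """Fit a Gaussian (mean, inverse covariance) to each feature's
    histograms, computed over patches synthesized from different
    poses of one standard initial image."""
    model = []
    for patches in training_patches:      # one list of patches per feature
        H = np.stack([local_histogram(p) for p in patches])
        mean = H.mean(axis=0)
        cov = np.cov(H, rowvar=False) + 1e-6 * np.eye(H.shape[1])
        model.append((mean, np.linalg.inv(cov)))
    return model

def negative_log_likelihood(params, frame, model, project_features):
    """Error function: summed Mahalanobis distance between the observed
    feature histograms and the pre-computed Gaussian model."""
    err = 0.0
    # project_features is an assumed helper (not from the paper's text):
    # it maps pose/animation parameters to per-feature image patches.
    for patch, (mean, cov_inv) in zip(project_features(frame, params), model):
        d = local_histogram(patch) - mean
        err += d @ cov_inv @ d
    return err

def track_frame(frame, prev_params, model, project_features):
    """Track one frame by refining the previous frame's parameters."""
    res = minimize(negative_log_likelihood, prev_params,
                   args=(frame, model, project_features),
                   method="Nelder-Mead")
    return res.x

Under the Gaussian assumption, minimizing the summed Mahalanobis distance is equivalent to maximizing the likelihood of the observed distributions, which is the role the error function plays in the description above.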

Several face tracking examples follow. Tracking runs in near real time, at an average frame rate of 20 frames per second.

FUNDING AGENCY

N/A

 

PUBLICATIONS

  1. Yisong Chen and Franck Davoine, "Simultaneous tracking of rigid head motion and non-rigid facial animation by analyzing local features statistically," in Proc. British Machine Vision Conference (BMVC), 2006.