
Face tracking algorithm robust to pose, illumination and face expression changes : a 3D parametric model approach / M. Anisetti, V. Bellandi, F. Beverina, L. Arnone. - In: VISAPP 2006 : proceedings of the first International Conference on Computer Vision Theory and Applications : Setúbal, Portugal, February 25-28, 2006 / edited by A. Ranchordas, H. Araujo, B. Encarnacao. - Setúbal : INSTICC Press, 2006. - ISBN 9728865406. - pp. 318-325. Paper presented at the 1st International Conference on Computer Vision Theory and Applications (VISAPP), Setúbal, Portugal, 2006.

Face tracking algorithm robust to pose, illumination and face expression changes : a 3D parametric model approach

M. Anisetti (first author); V. Bellandi (second author)
2006

Abstract

Considering the face as an object that moves through a scene, both its pose relative to the camera's point of view and its texture may change its appearance considerably. These changes are tightly coupled with alterations in illumination, which occur when the subject moves or when the lighting itself changes (a light switched on or off, etc.). This paper presents a method for tracking a face in a video sequence by recovering the full motion and the expression deformations of the head using a 3D expressive head model. Taking advantage of a 3D triangle-based face model, we are able to deal with any kind of illumination change and facial expression movement. In this parametric model, any change can be defined as a linear combination of a set of weighted bases, which can easily be included in a minimization algorithm based on a classical Newton optimization approach. The 3D model of the face is created from characteristic face points given on the first frame. Using a gradient descent approach, the algorithm simultaneously extracts the parameters related to the facial expression, the 3D posture and the virtual illumination conditions. The algorithm has been tested on the Cohn-Kanade database (Kanade et al., 2000) for expression estimation, and its precision has been compared with a standard multi-camera system for 3D tracking (Ferrigno and Pedotti, 1985). Regarding illumination tests, we use synthetic movies created with standard 3D-mesh animation tools and real experimental videos recorded under very extreme illumination conditions. The results are promising in all cases, even with large head movements and changes in expression and illumination conditions. The proposed approach has a twofold application: as part of a facial expression analysis system and as a preprocessing step for identification systems (expression, pose and illumination normalization).
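To make the optimization described in the abstract concrete, the following is a minimal illustrative sketch, not the authors' implementation, of how appearance changes modeled as a linear combination of weighted bases can be fitted by gradient descent on a photometric error. The names (template, expression_basis, illumination_basis, fit_frame) and the plain first-order update are assumptions introduced for illustration only.

import numpy as np

def render(template, expression_basis, illumination_basis, p_expr, p_illum):
    # Model appearance = neutral template plus weighted basis contributions,
    # linear in both the expression and illumination parameters.
    return template + expression_basis @ p_expr + illumination_basis @ p_illum

def fit_frame(frame_pixels, template, expression_basis, illumination_basis,
              lr=1e-3, n_iters=200):
    # Gradient descent on 0.5 * ||render(...) - frame||^2 (hypothetical sketch).
    p_expr = np.zeros(expression_basis.shape[1])
    p_illum = np.zeros(illumination_basis.shape[1])
    for _ in range(n_iters):
        residual = render(template, expression_basis, illumination_basis,
                          p_expr, p_illum) - frame_pixels
        # Because the model is linear in its parameters, the gradient of the
        # squared photometric error is simply basis^T @ residual.
        p_expr -= lr * (expression_basis.T @ residual)
        p_illum -= lr * (illumination_basis.T @ residual)
    return p_expr, p_illum

In the method the paper describes, the 3D posture parameters additionally enter through the rigid transformation and projection of the triangle mesh, so the residual is no longer linear in all parameters and a classical Newton-style step, as mentioned in the abstract, takes the place of the simple first-order update sketched above.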
Academic field INF/01 - Informatica (Computer Science)
2006
Book Part (author)
Files associated with this product:
No files are associated with this product.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2434/44162
Citations
  • PMC: ND
  • Scopus: 8
  • ISI (Web of Science): 0