Sparse to Dense Dynamic 3D Facial Expression Generation
Document type :
Conference paper (with proceedings)
Title :
Sparse to Dense Dynamic 3D Facial Expression Generation
Author(s) :
Otberdout, Naima [Author]
Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189 [CRIStAL]
Ferrari, Claudio [Author]
Daoudi, Mohamed [Author]
Ecole nationale supérieure Mines-Télécom Lille Douai [IMT Nord Europe]
Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189 [CRIStAL]
Berretti, Stefano [Author]
University of Florence [UniFI]
Del Bimbo, Alberto [Author]
University of Florence [UniFI]
Conference title :
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
City :
New Orleans
Country :
United States of America
Start date of the conference :
2022-06-19
HAL domain(s) :
Computer Science [cs]/Computer Vision and Pattern Recognition [cs.CV]
English abstract : [en]
In this paper, we propose a solution to the task of generating dynamic 3D facial expressions from a neutral 3D face and an expression label. This involves solving two sub-problems: (i) modeling the temporal dynamics of expressions, and (ii) deforming the neutral mesh to obtain the expressive counterpart. We represent the temporal evolution of expressions using the motion of a sparse set of 3D landmarks that we learn to generate by training a manifold-valued GAN (Motion3DGAN). To better encode the expression-induced deformation and disentangle it from the identity information, the generated motion is represented as per-frame displacement from a neutral configuration. To generate the expressive meshes, we train a Sparse2Dense mesh Decoder (S2D-Dec) that maps the landmark displacements to a dense, per-vertex displacement. This allows us to learn how the motion of a sparse set of landmarks influences the deformation of the overall face surface, independently from the identity. Experimental results on the CoMA and D3DFACS datasets show that our solution brings significant improvements with respect to previous solutions in terms of both dynamic expression generation and mesh reconstruction, while retaining good generalization to unseen data. The code and the pretrained model will be made publicly available.
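
To illustrate the sparse-to-dense idea described in the abstract, below is a minimal, hypothetical sketch (assuming PyTorch, and not the authors' released code): a plain MLP stands in for the S2D-Dec decoder, mapping flattened landmark displacements to per-vertex displacements that are added to a neutral mesh, so the fixed neutral geometry carries the identity while the displacements carry the expression. The landmark count (68), vertex count, and layer sizes are placeholders chosen for illustration.

# Hypothetical sketch of a sparse-to-dense decoder; not the paper's
# actual S2D-Dec architecture. Landmark/vertex counts are assumptions.
import torch
import torch.nn as nn

N_LANDMARKS = 68      # assumed number of sparse 3D landmarks
N_VERTICES = 5023     # assumed mesh resolution

class Sparse2DenseDecoder(nn.Module):
    def __init__(self, n_landmarks=N_LANDMARKS, n_vertices=N_VERTICES, hidden=512):
        super().__init__()
        # Input: flattened landmark displacement field (n_landmarks * 3).
        # Output: dense per-vertex displacement field (n_vertices * 3).
        self.net = nn.Sequential(
            nn.Linear(n_landmarks * 3, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_vertices * 3),
        )
        self.n_vertices = n_vertices

    def forward(self, landmark_disp):
        # landmark_disp: (batch, n_landmarks, 3) displacements from neutral
        batch = landmark_disp.shape[0]
        dense_disp = self.net(landmark_disp.flatten(1))
        return dense_disp.view(batch, self.n_vertices, 3)

# Usage: apply per-frame displacements to one neutral mesh. Each frame of
# generated landmark motion yields one expressive mesh of the sequence.
decoder = Sparse2DenseDecoder()
neutral = torch.zeros(1, N_VERTICES, 3)           # placeholder neutral mesh
motion = torch.randn(30, N_LANDMARKS, 3) * 0.01   # 30 frames of landmark motion
frames = neutral + decoder(motion)                # (30, N_VERTICES, 3) sequence
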
Language :
English
Peer reviewed article :
Yes
Audience :
International
Popular science :
No
Files
- Arxiv_CVPR_2022.pdf (open access): https://hal.archives-ouvertes.fr/hal-03598633/document