Avatar Reaction to Multimodal Human Behavior
Document type :
Conference paper with proceedings
Title :
Avatar Reaction to Multimodal Human Behavior
Author(s) :
Chopin, Baptiste [Auteur]
Spatio-Temporal Activity Recognition Systems [STARS]
Daoudi, Mohamed [Auteur]
Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189 [CRIStAL]
Ecole nationale supérieure Mines-Télécom Lille Douai [IMT Nord Europe]
Bartolo, Angela [Auteur]
Laboratoire Sciences Cognitives et Sciences Affectives - UMR 9193 [SCALab]
Conference title :
GHB 2023 - Generation of Human Face and Body Behavior Workshop in conjunction with ICIAP
City :
Udine
Country :
Italy
Start date of the conference :
2023-09-11
Book title :
Lecture Notes in Computer Science
Journal title :
Image Analysis and Processing – ICIAP 2023 22nd International Conference, ICIAP 2023, Udine, Italy, September 11–15, 2023, Proceedings, Part I
Publication date :
2023
English keyword(s) :
avatar reaction
human motion prediction
facial expression
HAL domain(s) :
Computer Science [cs]/Computer Vision and Pattern Recognition [cs.CV]
English abstract : [en]
In this paper, we propose a virtual agent application. We develop a virtual agent that reacts to gestures and a virtual environment in which it can interact with the user. We capture motion with a Kinect V2 camera, predict the end of the motion and then classify it. The application also features a facial expression recognition module. In addition to these modules, we also include an OpenAI conversation module. The application can also be used with a virtual reality headset.
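The abstract describes a reaction loop: capture the user's motion, predict how the ongoing gesture ends, classify it, recognize the facial expression, and have the avatar respond. The sketch below is a minimal, self-contained illustration of how such a loop could be wired together; all class names, method names, and placeholder logic (MotionPredictor, GestureClassifier, ExpressionRecognizer, Avatar) are assumptions introduced for illustration and do not correspond to the authors' implementation or to any Kinect SDK API.

```python
# Hypothetical sketch of the reaction pipeline described in the abstract.
# None of the names below come from the paper; the logic is placeholder only.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Frame:
    joints: List[float]                 # flattened 3D joint positions for one capture frame
    face_image: bytes = b""             # cropped face region used for expression recognition


class MotionPredictor:
    """Completes an ongoing gesture from the frames observed so far (placeholder logic)."""
    def complete(self, observed: List[Frame]) -> List[Frame]:
        # A real system would run a learned sequence model; repeating the last
        # frame is only a trivial stand-in so the example runs end to end.
        return observed + [observed[-1]] * 10


class GestureClassifier:
    """Assigns a gesture label to a completed motion sequence (placeholder logic)."""
    def classify(self, motion: List[Frame]) -> str:
        return "wave" if len(motion) > 5 else "idle"


class ExpressionRecognizer:
    """Maps a face crop to a coarse expression label (placeholder logic)."""
    def classify(self, face_image: bytes) -> str:
        return "neutral"


class Avatar:
    """Chooses a reaction from the recognized gesture and expression."""
    def react(self, gesture: str, expression: str) -> str:
        if gesture == "wave":
            return "wave_back"
        return "look_at_user"


def process(observed: List[Frame]) -> str:
    """One end-to-end pass: complete the motion, classify it, read the face, react."""
    motion = MotionPredictor().complete(observed)
    gesture = GestureClassifier().classify(motion)
    expression = ExpressionRecognizer().classify(observed[-1].face_image)
    return Avatar().react(gesture, expression)


if __name__ == "__main__":
    frames = [Frame(joints=[0.0] * 75) for _ in range(8)]   # 25 joints x 3 coordinates
    print(process(frames))                                   # -> "wave_back"
```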
Language :
English
Peer reviewed article :
Yes
Audience :
Internationale
Popular science :
No
ANR Project :
Collections :
Source :
Files :
GHB2023.pdf (Open access)