Avatar Reaction to Multimodal Human Behavior
Document type :
Conference paper with proceedings (Communication dans un congrès avec actes)
Title :
Avatar Reaction to Multimodal Human Behavior
Author(s) :
Chopin, Baptiste [Author]
Spatio-Temporal Activity Recognition Systems [STARS]
Daoudi, Mohamed [Author]
Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189 [CRIStAL]
Ecole nationale supérieure Mines-Télécom Lille Douai [IMT Nord Europe]
Bartolo, Angela [Author]
Laboratoire Sciences Cognitives et Sciences Affectives - UMR 9193 [SCALab]
Conference title :
ICIAP 2023- 22nd International Conference on Image Analysis and Processing
City :
Udine
Country :
Italy
Start date of the conference :
2023-09-11
English keyword(s) :
avatar reaction
human motion prediction
facial expression
HAL domain(s) :
Computer Science [cs]/Computer Vision and Pattern Recognition [cs.CV]
Life Sciences [q-bio]/Neuroscience [q-bio.NC]/Cognitive Sciences
English abstract : [en]
In this paper, we propose a virtual agent application. We develop a virtual agent that reacts to gestures and a virtual environment in which it can interact with the user. We capture motion with a Kinect V2 camera, predict the end of the motion, and then classify it. The application also features a facial expression recognition module. In addition to these modules, we include an OpenAI conversation module. The application can also be used with a virtual reality headset.
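The pipeline summarized in the abstract (capture partial motion, predict its completion, classify the gesture, trigger an avatar reaction) can be sketched as follows. This is a minimal illustrative sketch only: every class and function name here is a hypothetical assumption, not the authors' actual implementation, and the prediction/classification steps are stubbed with trivial placeholders.

```python
# Hypothetical sketch of the avatar reaction loop described in the abstract.
# All names (Frame, predict_motion_end, classify_gesture, avatar_reaction)
# are illustrative assumptions, not the authors' code.

from dataclasses import dataclass
from typing import List

@dataclass
class Frame:
    joints: List[float]  # flattened 3D skeleton joints from a depth camera

def predict_motion_end(observed: List[Frame], horizon: int) -> List[Frame]:
    """Placeholder for the motion-prediction model: extend the observed
    sequence by `horizon` frames (here, naively repeating the last frame)."""
    return observed + [observed[-1]] * horizon

def classify_gesture(sequence: List[Frame]) -> str:
    """Placeholder gesture classifier (e.g. a sequence model in practice)."""
    return "wave" if len(sequence) > 10 else "idle"

def avatar_reaction(gesture: str) -> str:
    """Map a recognized gesture to an avatar animation name."""
    reactions = {"wave": "wave_back", "handshake": "extend_hand", "idle": "stand"}
    return reactions.get(gesture, "stand")

# One pass of the loop: observe partial motion, complete it, classify, react.
observed = [Frame(joints=[0.0] * 75) for _ in range(8)]
completed = predict_motion_end(observed, horizon=5)
reaction = avatar_reaction(classify_gesture(completed))
print(reaction)  # -> wave_back (13 frames > 10, so the stub returns 'wave')
```

In the actual system, the stubs would be replaced by the learned motion-prediction and gesture-classification models, with frames streamed from the Kinect V2.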
Language :
English
Peer reviewed article :
Yes
Audience :
International
Popular science :
No
ANR Project :
Comment :
Workshops session
Collections :
Source :
Files
- GHB2023.pdf (Open access)