Avatar Reaction to Multimodal Human Behavior
Document type:
Conference paper with proceedings
Permanent URL:
Title:
Avatar Reaction to Multimodal Human Behavior
Author(s):
Chopin, Baptiste [Author]
Université Côte d'Azur [UniCA]
Daoudi, Mohamed [Author]
Centre de Recherche en Informatique, Signal et Automatique de Lille (CRIStAL) - UMR 9189
Bartolo, Angela [Author]
Sciences Cognitives et Sciences Affectives (SCALab) - UMR 9193
Conference title:
Image Analysis and Processing - ICIAP 2023
City:
Udine
Country:
Italy
Conference start date:
2023-09-11
Book title:
Image Analysis and Processing - ICIAP 2023 Workshops. ICIAP 2023. Lecture Notes in Computer Science
Publisher:
Springer Nature Switzerland
Publication date:
2024-01-24
HAL discipline(s):
Cognitive science
English abstract: [en]
In this paper, we propose a virtual agent application. We develop a virtual agent that reacts to gestures and a virtual environment in which it can interact with the user. We capture motion with a Kinect V2 camera, predict the end of the motion, and then classify it. The application also features a facial expression recognition module. In addition to these modules, we include an OpenAI conversation module. The application can also be used with a virtual reality headset.
Language:
English
Audience:
International
Popular science:
No
Institution(s):
Université de Lille
CNRS
CHU Lille
Deposit date:
2024-03-07T17:37:41Z
2024-03-08T07:49:17Z
Annexes
- document
- Open access
- Main file source
- Access the document