The Sabancı University Dynamic Face Database (SUDFace): Development and validation of an audiovisual stimulus set of recited and free speeches with neutral facial expressions
Document type:
Journal article
Permanent URL:
Title:
The Sabancı University Dynamic Face Database (SUDFace): Development and validation of an audiovisual stimulus set of recited and free speeches with neutral facial expressions
Author(s):
Sentürk, YD. [Auteur]
Tavacioglu, EE. [Auteur]
Duymaz, I. [Auteur]
Sayim, Bilge [Auteur]
Sciences Cognitives et Sciences Affectives (SCALab) - UMR 9193
Alp, N. [Auteur]
Journal:
Behavior Research Methods
Volume:
55
Pages:
3078–3099
Publication date:
2023-09
Keyword(s) in English:
Face database
Dynamic face
Neutral face
Natural face
Face recognition
Speech recognition
HAL discipline(s):
Cognitive science
Abstract (English):
Faces convey a wide range of information, including one's identity and emotional and mental states. Face perception is a major research topic in many research fields, such as cognitive science, social psychology, and neuroscience. Frequently, stimuli are selected from a range of available face databases. However, even though faces are highly dynamic, most databases consist of static face stimuli. Here, we introduce the Sabancı University Dynamic Face (SUDFace) database. The SUDFace database consists of 150 high-resolution audiovisual videos acquired in a controlled lab environment and stored with a resolution of 1920 × 1080 pixels at a frame rate of 60 Hz. The multimodal database consists of three videos of each human model in frontal view in three different conditions: vocalizing two scripted texts (conditions 1 and 2) and one Free Speech (condition 3). The main focus of the SUDFace database is to provide a large set of dynamic faces with neutral facial expressions and natural speech articulation. Variables such as face orientation, illumination, and accessories (piercings, earrings, facial hair, etc.) were kept constant across all stimuli. We provide detailed stimulus information, including facial features (pixel-wise calculations of face length, eye width, etc.) and speeches (e.g., duration of speech and repetitions). In two validation experiments, a total of 227 participants rated each video on several psychological dimensions (e.g., neutralness and naturalness of expressions, valence, and the perceived mental states of the models) using Likert scales. The database is freely accessible for research purposes.
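The specifications given in the abstract (1920 × 1080 pixels at 60 Hz) can be verified programmatically once a video from the database has been downloaded. Below is a minimal sketch using OpenCV; the file name is a hypothetical placeholder, as the database's actual naming scheme is not described in this record.

    # Minimal sketch: check the properties of a downloaded SUDFace video.
    # The file name below is a hypothetical placeholder, not the real scheme.
    import cv2

    cap = cv2.VideoCapture("sudface_model001_condition1.mp4")
    if not cap.isOpened():
        raise IOError("Could not open video file")

    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))    # expected: 1920
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))  # expected: 1080
    fps = cap.get(cv2.CAP_PROP_FPS)                   # expected: 60.0
    n_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))

    print(f"{width} x {height} @ {fps:.0f} Hz, "
          f"{n_frames} frames (~{n_frames / fps:.1f} s)")
    cap.release()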
Language:
English
Audience:
International
Popular science:
No
Institution(s):
Université de Lille
CNRS
CHU Lille
Research team(s):
Équipe Action, Vision et Apprentissage (AVA)
Deposit date:
2024-01-18T07:56:56Z
Files
- document
- Open access
- Access the document