The Sabancı University Dynamic Face Database (SUDFace): Development and validation of an audiovisual stimulus set of recited and free speeches with neutral facial expressions
Document type :
Journal article
Title :
The Sabancı University Dynamic Face Database (SUDFace): Development and validation of an audiovisual stimulus set of recited and free speeches with neutral facial expressions
Author(s) :
Sentürk, YD. [Author]
Tavacioglu, EE. [Author]
Duymaz, I. [Author]
Sayim, Bilge [Author]
Sciences Cognitives et Sciences Affectives (SCALab) - UMR 9193
Alp, N. [Author]
Journal title :
Behavior Research Methods
Volume number :
55
Pages :
pp. 3078–3099
Publication date :
2023-09
English keyword(s) :
Face database
Dynamic face
Neutral face
Natural face
Face recognition
Speech recognition
HAL domain(s) :
Sciences cognitives
English abstract :
Faces convey a wide range of information, including one's identity and emotional and mental states. Face perception is a major research topic in many research fields, such as cognitive science, social psychology, and neuroscience. Frequently, stimuli are selected from a range of available face databases. However, even though faces are highly dynamic, most databases consist of static face stimuli. Here, we introduce the Sabancı University Dynamic Face (SUDFace) database. The SUDFace database consists of 150 high-resolution audiovisual videos acquired in a controlled lab environment and stored with a resolution of 1920 × 1080 pixels at a frame rate of 60 Hz. The multimodal database consists of three videos of each human model in frontal view in three different conditions: vocalizing two scripted texts (conditions 1 and 2) and one Free Speech (condition 3). The main focus of the SUDFace database is to provide a large set of dynamic faces with neutral facial expressions and natural speech articulation. Variables such as face orientation, illumination, and accessories (piercings, earrings, facial hair, etc.) were kept constant across all stimuli. We provide detailed stimulus information, including facial features (pixel-wise calculations of face length, eye width, etc.) and speeches (e.g., duration of speech and repetitions). In two validation experiments, a total of 227 participants rated each video on several psychological dimensions (e.g., neutralness and naturalness of expressions, valence, and the perceived mental states of the models) using Likert scales. The database is freely accessible for research purposes.
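As a practical illustration of the specifications reported in the abstract, the sketch below shows how one might sanity-check downloaded SUDFace videos against the stated 1920 × 1080 pixel resolution and 60 Hz frame rate using OpenCV. The local directory name, file extension (.mp4), and naming scheme are assumptions, not part of the published database description.

```python
# Minimal sketch for sanity-checking SUDFace videos with OpenCV.
# Assumptions: videos were downloaded locally into a folder named
# "SUDFace" as .mp4 files; the actual layout and format may differ.
from pathlib import Path

import cv2

DB_ROOT = Path("SUDFace")  # hypothetical local path to the database

for video_path in sorted(DB_ROOT.glob("*.mp4")):
    cap = cv2.VideoCapture(str(video_path))
    if not cap.isOpened():
        print(f"could not open {video_path}")
        continue
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    fps = cap.get(cv2.CAP_PROP_FPS)
    # The paper reports 1920 x 1080 pixels at 60 Hz for all 150 videos.
    ok = (width, height) == (1920, 1080) and round(fps) == 60
    status = "OK" if ok else "MISMATCH"
    print(f"{video_path.name}: {width}x{height} @ {fps:.0f} Hz {status}")
    cap.release()
```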
Language :
English
Audience :
International
Popular science :
No
Administrative institution(s) :
Université de Lille
CNRS
CHU Lille
Research team(s) :
Équipe Action, Vision et Apprentissage (AVA)
Submission date :
2024-01-18T07:56:56Z
2024-02-12T13:23:11Z
2024-02-12T13:24:54Z
Files :
- document (open access)