Preliminary study for intonation classification of imagined speech for brain-computer interface applications
Document type :
Communication in a conference with proceedings
Title :
Preliminary study for intonation classification of imagined speech for brain-computer interface applications
Author(s) :
Casso, Isabel [Author]
Laboratoire Sciences Cognitives et Sciences Affectives - UMR 9193 [SCALab]
Rouillard, Jose [Author]
Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189 [CRIStAL]
Si-Mohammed, Hakim [Author]
Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189 [CRIStAL]
Betrouni, Nacim [Author]
Lille Neurosciences & Cognition - U 1172 [LilNCog]
Cabestaing, Francois [Author]
Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189 [CRIStAL]
Basirat, Anahita [Author]
Laboratoire Sciences Cognitives et Sciences Affectives - UMR 9193 [SCALab]
Conference title :
EUSIPCO (European Signal Processing Conference)
City :
Belgrade
Start date of the conference :
2022-08-29
English keyword(s) :
Intonation
imagined speech
EEG
brain-computer interfaces
prosody
BCI brain computer interface
HAL domain(s) :
Computer Science [cs]/Signal and Image Processing [eess.SP]
Computer Science [cs]/Human-Computer Interaction [cs.HC]
English abstract : [en]
In the current study, we focused on decoding speech prosody from EEG. Prosody (i.e., the melody and rhythm of speech) is important during communication as it allows speakers to convey emotion and meaning. However, it has received little attention in the field of brain-computer interfaces. To address this issue, we contrasted the production of two syllables, "ba" and "da", produced mentally as an affirmation (e.g., "ba.") or a question (e.g., "ba?") using two different intonations. We focused on spectral features. After classification in the time-frequency domain, we found above chance-level accuracies in specific frequency ranges of the alpha band (7-12 Hz) early in the production phase. We also obtained above chance-level results in a range of the low-beta band (16-20 Hz) during a late time window. Based on visual inspection of the topographies and on the literature, we suggest that the results during the early time window, but not those during the late time window, reflect a genuine difference between imagined affirmation and question production. Future studies should provide more information about the neural markers and underlying neuro-cognitive processes to improve the understanding of imagined intonation production. This would pave the way for the development of speech-based BCIs capable of differentiating intonation, and prosody in general.
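The record does not describe the analysis pipeline in detail; as a minimal illustrative sketch only (assuming MNE-Python epochs and scikit-learn, with hypothetical file names, event codes, and time windows), a band-power classification of imagined-intonation EEG in a given frequency band and time window could look like this:

# Illustrative sketch only: band-power classification of imagined-intonation EEG epochs.
# Assumes MNE-Python and scikit-learn; file name, event codes and windows are hypothetical.
import numpy as np
import mne
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Load pre-cut epochs (hypothetical file): two classes, imagined "affirmation" vs. "question".
epochs = mne.read_epochs("imagined_intonation-epo.fif", preload=True)
labels = epochs.events[:, -1]  # class codes, e.g. 1 = affirmation, 2 = question

# Restrict to one frequency band and one time window of interest,
# e.g. alpha (7-12 Hz) early in the production phase.
low_freq, high_freq = 7.0, 12.0
tmin, tmax = 0.0, 0.5  # seconds relative to production onset (hypothetical window)
epochs_band = epochs.copy().filter(low_freq, high_freq, fir_design="firwin")
data = epochs_band.crop(tmin, tmax).get_data()  # shape: (n_epochs, n_channels, n_times)

# Spectral feature: log band power per channel within the window.
features = np.log(np.mean(data ** 2, axis=2))

# Chance level is 50% for the two-class problem; compare cross-validated accuracy against it.
clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
scores = cross_val_score(clf, features, labels, cv=5)
print(f"Mean accuracy: {scores.mean():.2f} (chance = 0.50)")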
Language :
English
Peer reviewed article :
Yes
Audience :
International
Popular science :
No
Files
- https://hal.archives-ouvertes.fr/hal-03769268/document (Open access)
- EUSIPCO_COMREV_Paper_reviewer_corrections_Final.pdf (Open access)