High-frequency neural activity predicts word parsing in ambiguous speech streams
Document type:
Journal article
DOI:
PMID:
Permanent URL:
Title:
High-frequency neural activity predicts word parsing in ambiguous speech streams
Author(s):
Kösem, Anne [Author]
Max Planck Institute for Psycholinguistics
Radboud University [Nijmegen]
Neuroimagerie cognitive - Psychologie cognitive expérimentale [UNICOG-U992]
Basirat, Anahita [Author]
Laboratoire Sciences Cognitives et Sciences Affectives - UMR 9193 [SCALab]
Neuroimagerie cognitive - Psychologie cognitive expérimentale [UNICOG-U992]
Azizi, Leila [Author]
Neuroimagerie cognitive - Psychologie cognitive expérimentale [UNICOG-U992]
van Wassenhove, Virginie [Author]
Neuroimagerie cognitive - Psychologie cognitive expérimentale [UNICOG-U992]
Journal title:
Journal of Neurophysiology
Journal abbreviation:
J. Neurophysiol.
Volume:
116
Pages:
2497-2512
Publication date:
2016-12-01
ISSN:
1522-1598
HAL discipline(s):
Cognitive science
Abstract (in English):
During speech listening, the brain parses a continuous acoustic stream of information into computational units (e.g., syllables or words) necessary for speech comprehension. Recent neuroscientific hypotheses have proposed that neural oscillations contribute to speech parsing, but whether they do so on the basis of acoustic cues (bottom-up acoustic parsing) or as a function of available linguistic representations (top-down linguistic parsing) is unknown. In this magnetoencephalography study, we contrasted acoustic and linguistic parsing using bistable speech sequences. While listening to the speech sequences, participants were asked to maintain one of the two possible speech percepts through volitional control. We predicted that the tracking of speech dynamics by neural oscillations would not only follow the acoustic properties but also shift in time according to the participant's conscious speech percept. Our results show that the latency of high-frequency activity (specifically, beta and gamma bands) varied as a function of the perceptual report. In contrast, the phase of low-frequency oscillations was not strongly affected by top-down control. Whereas changes in low-frequency neural oscillations were compatible with the encoding of prelexical segmentation cues, high-frequency activity specifically informed on an individual's conscious speech percept.
Language:
English
Audience:
Not specified
Institution(s):
Université de Lille
CNRS
CHU Lille
Research team(s):
Équipe Langage
Deposit date:
2019-02-13T14:48:19Z
2020-02-11T16:34:13Z
2021-04-09T07:50:25Z