Crossmodal hierarchical predictive coding for audiovisual sequences in the human brain

Detailed bibliography
Published in: Communications Biology, Vol. 7, No. 1, Article 965 (15 pp.)
Main authors: Huang, Yiyuan Teresa; Wu, Chien-Te; Fang, Yi-Xin Miranda; Fu, Chin-Kun; Koike, Shinsuke; Chao, Zenas C.
Format: Journal Article
Language: English
Published: London: Nature Publishing Group UK, 09.08.2024
ISSN: 2399-3642
Description
Summary: Predictive coding theory suggests that the brain anticipates sensory information using prior knowledge. While this theory has been extensively researched within individual sensory modalities, evidence for predictive processing across sensory modalities is limited. Here, we examine how crossmodal knowledge is represented and learned in the brain by identifying the hierarchical networks underlying crossmodal predictions, in which information from one sensory modality leads to a prediction in another modality. We record electroencephalography (EEG) during a crossmodal audiovisual local-global oddball paradigm, in which the predictability of transitions between tones and images is manipulated at both the stimulus and sequence levels. To dissect the complex predictive signals in our EEG data, we employ a model-fitting approach that untangles neural interactions across modalities and hierarchies. The model-fitting results demonstrate that audiovisual integration occurs at the level of both individual stimulus interactions and multi-stimulus sequences. Furthermore, we identify the spatio-spectro-temporal signatures of prediction-error signals across hierarchies and modalities, and reveal that auditory and visual prediction errors are rapidly redirected to central-parietal electrodes during learning through alpha-band interactions. Our study suggests a crossmodal predictive coding mechanism in which unimodal predictions are processed by distributed brain networks to form crossmodal knowledge. A generalized framework for predictive coding across modalities and hierarchies reveals how the brain represents and learns crossmodal knowledge in sequences.
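
To make the two-level manipulation concrete, the sketch below shows one way a crossmodal local-global oddball block could be generated. This is a minimal, hypothetical illustration rather than the authors' stimulus code: the tone/image tokens (A1, A2, V1, V2), the congruent tone-to-image pairing, the trial count, and the deviant probability are all assumptions made for illustration.

```python
import random

# Hypothetical stimulus tokens: two tones and two images.
TONES = ("A1", "A2")
CONGRUENT = {"A1": "V1", "A2": "V2"}  # assumed tone-to-image pairing

def make_trial(standard_pairing=True):
    """One audiovisual trial: a tone followed by an image.

    Stimulus level: in a 'standard' trial the tone predicts its
    congruent image; a 'deviant' trial violates that transition.
    """
    tone = random.choice(TONES)
    image = CONGRUENT[tone]
    if not standard_pairing:  # flip to the incongruent image
        image = "V2" if image == "V1" else "V1"
    return (tone, image)

def make_block(n_trials=100, p_deviant=0.2, frequent_is_congruent=True):
    """Sequence level: the frequent trial type sets the global rule,
    so rare trials of the other type act as sequence-level oddballs."""
    trials = []
    for _ in range(n_trials):
        frequent = random.random() >= p_deviant
        trials.append(make_trial(frequent == frequent_is_congruent))
    return trials

if __name__ == "__main__":
    random.seed(0)
    print(make_block(n_trials=8))
```

Crossing the two factors (congruent vs. incongruent pairing at the stimulus level, frequent vs. rare trial type at the sequence level) would yield the kinds of conditions needed to dissociate stimulus-level from sequence-level prediction errors.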
DOI: 10.1038/s42003-024-06677-6