Brains and algorithms partially converge in natural language processing


Detailed bibliography
Published in: Communications Biology, Vol. 5, No. 1, Article 134 (10 pp.)
Main authors: Caucheteux, Charlotte; King, Jean-Rémi
Format: Journal Article
Language: English
Published: London: Nature Publishing Group UK, 16 Feb 2022 (Nature Publishing Group; Nature Portfolio)
ISSN: 2399-3642
Abstract Deep learning algorithms trained to predict masked words from large amounts of text have recently been shown to generate activations similar to those of the human brain. However, what drives this similarity currently remains unknown. Here, we systematically compare a variety of deep language models to identify the computational principles that lead them to generate brain-like representations of sentences. Specifically, we analyze the brain responses to 400 isolated sentences in a large cohort of 102 subjects, each recorded for two hours with functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG). We then test where and when each of these algorithms maps onto the brain responses. Finally, we estimate how the architecture, training, and performance of these models independently account for the generation of brain-like representations. Our analyses reveal two main findings. First, the similarity between the algorithms and the brain primarily depends on their ability to predict words from context. Second, this similarity reveals the rise and maintenance of perceptual, lexical, and compositional representations within each cortical region. Overall, this study shows that modern language algorithms partially converge towards brain-like solutions, and thus delineates a promising path to unravel the foundations of natural language processing. Charlotte Caucheteux and Jean-Rémi King examine the ability of transformer neural networks trained on word-prediction tasks to fit representations in the human brain measured with fMRI and MEG. Their results provide further insight into the workings of transformer language models and their relevance to brain responses.
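The analysis the abstract outlines (mapping each model's activations onto fMRI/MEG responses and scoring the fit) is commonly implemented as a regularized linear encoding model. The following is a minimal, self-contained sketch on synthetic data, not the authors' code: it assumes ridge regression and a Pearson-correlation "brain score", and every array size and variable name here is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_words, n_dims, n_voxels = 400, 50, 20

# Hypothetical model activations for 400 words, and simulated brain
# responses generated as a noisy linear function of those activations.
X = rng.standard_normal((n_words, n_dims))
W_true = rng.standard_normal((n_dims, n_voxels))
Y = X @ W_true + 0.5 * rng.standard_normal((n_words, n_voxels))

# Fit ridge regression on a training split (closed-form solution).
train, test = slice(0, 300), slice(300, None)
lam = 1.0
W = np.linalg.solve(X[train].T @ X[train] + lam * np.eye(n_dims),
                    X[train].T @ Y[train])
Y_pred = X[test] @ W

def pearson_per_voxel(a, b):
    """Pearson r between matching columns of a and b."""
    a = a - a.mean(axis=0)
    b = b - b.mean(axis=0)
    return (a * b).sum(axis=0) / np.sqrt((a ** 2).sum(axis=0) * (b ** 2).sum(axis=0))

# "Brain score": correlation between predicted and held-out responses,
# averaged over voxels.
brain_score = pearson_per_voxel(Y_pred, Y[test]).mean()
print(f"mean brain score: {brain_score:.2f}")
```

In practice such pipelines cross-validate the regularization strength and compute one score per voxel (fMRI) or per sensor and time sample (MEG); the single averaged score above only illustrates the overall shape of the computation.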
ArticleNumber 134
Authors:
1. Caucheteux, Charlotte (Facebook AI Research; Université Paris-Saclay, Inria, CEA; ccaucheteux@fb.com)
2. King, Jean-Rémi (Facebook AI Research; École normale supérieure, PSL University, CNRS; jeanremi@fb.com; ORCID 0000-0002-2121-170X)
BackLink https://www.ncbi.nlm.nih.gov/pubmed/35173264 (view this record in MEDLINE/PubMed)
https://hal.science/hal-03361439 (view this record in HAL)
ContentType Journal Article
Copyright The Author(s) 2022, corrected publication 2023. This article is published under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
DOI 10.1038/s42003-022-03036-1
Discipline Biology
Computer Science
EISSN 2399-3642
EndPage 10
ExternalDocumentID PMC8850612; PMID 35173264; HAL oai:HAL:hal-03361439v1
Genre Research Support, Non-U.S. Gov't
Journal Article
ISICitedReferencesCount 144
ISSN 2399-3642
IsDoiOpenAccess true
IsOpenAccess true
IsPeerReviewed true
IsScholarly true
Issue 1
Keywords Magneto-encephalography
Encoding
Functional Magnetic Resonance Imaging
Natural Language Processing
Language English
License The Author(s) 2022. Open Access: this article is licensed under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution, and reproduction in any medium or format, provided appropriate credit is given and any changes are indicated.
ORCID 0000-0002-2121-170X
0000-0002-3965-3143
PMID 35173264
PageCount 10
PublicationCentury 2000
PublicationDate 2022-02-16
PublicationDateYYYYMMDD 2022-02-16
PublicationDecade 2020
PublicationPlace London
PublicationTitle Communications biology
PublicationTitleAbbrev Commun Biol
PublicationTitleAlternate Commun Biol
PublicationYear 2022
Publisher Nature Publishing Group UK
Nature Publishing Group
Nature Portfolio
References RadfordALanguage models are unsupervised multitask learnersOpenAI Blog201919
DestrieuxCFischlBDaleAHalgrenEAutomatic parcellation of human cortical gyri and sulci using standard anatomical nomenclatureNeuroimage20105311510.1016/j.neuroimage.2010.06.01020547229
Nastase, S. A. et al. Narratives: fmri data for evaluating models of naturalistic language comprehension. Trends in neurosciences43, 271–273 (2020).
CadieuCFDeep neural networks rival the representation of primate it cortex for core visual object recognitionPLoS Comput. Biol.201410e100396310.1371/journal.pcbi.1003963255212944270441
Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S. & Dean, J. Distributed Representations of Words and Phrases and their Compositionality. Advances in Neural Information Processing Systems 3111–3119 (MIT Press, 2013).
Marcus, G. Deep learning: a critical appraisal. Preprint at https://arXiv.org/1801.00631 (2018).
Bojanowski, P., Grave, E., Joulin, A. & Mikolov, T. Enriching Word Vectors with Subword Information. In Transactions of the Association for Computational Linguistics (2016).
Mitchell, T. M. et al. Predicting human brain activity associated with the meanings of nouns. Science320, 1191–1195 (2008).
KingJ-RDehaeneSCharacterizing the dynamics of mental representations: the temporal generalization methodTrends Cogn. Sci.20141820321010.1016/j.tics.2014.01.002245939825635958
KellAJEYaminsDLKShookENNorman-HaignereSVMcDermottJHA task-optimized neural network replicates human auditory behavior, predicts brain responses, and reveals a cortical processing hierarchyNeuron201898630–64410.1016/j.neuron.2018.03.044
Frankle, J. & Carbin, M. The lottery ticket hypothesis: finding sparse, trainable neural networks. arXiv preprint arXiv:1803.03635 (2018).
DehaeneSCohenLThe unique role of the visual word form area in readingTrends Cogn. Sci.20111525426210.1016/j.tics.2011.04.00321592844
Reddy Oota, S., Manwani, N. & Raju S, B. fMRI semantic category decoding using linguistic encoding of word embeddings. In International Conference on Neural Information Processing (Springer, Cham, 2018).
Attardi, G. Wikiextractor. https://github.com/attardi/wikiextractor (2015).
Caucheteux, C., Gramfort, A. & King, J.-R. Disentangling syntax and semantics in the brain with deep networks. ICML 2021-38th International Conference on Machine Learning (2021).
LeeCSAlyMBaldassanoCAnticipation of temporally structured events in the braineLife202110e649721:CAS:528:DC%2BB3MXitlOgs7bP10.7554/eLife.64972338849538169103
Bengio, Y., Ducharme, R. & Vincent, P. in Advances in Neural Information Processing Systems (eds. Leen, T. K. et al.) vol. 13, 932–938 (MIT Press, 2003).
Kell, A., Yamins, D., Shook, E., Norman-Haignere, S. & McDermott, J. A task-optimized neural network replicates human auditory behavior, predicts brain responses, and reveals a cortical processing hierarchy. Neuron98, 630–644 (2018).
Baek, J. et al. What is wrong with scene text recognition model comparisons? dataset and model analysis. In Proceedings of the IEEE International Conference on Computer Vision, 4715–4723 https://github.com/clovaai/deep-text-recognition-benchmark (2019).
EstebanOfmriprep: a robust preprocessing pipeline for functional mriNat. Methods2019161111161:CAS:528:DC%2BC1cXisVyhurnN10.1038/s41592-018-0235-430532080
Gauthier, J. & Ivanova, A. Does the brain represent words? an evaluation of brain decoding studies of language understanding. Preprint at https://arXiv.org/1806.00591 (2018).
Kriegeskorte, N., Mur, M. & Bandettini, P. A. Representational similarity analysis—connecting the branches of systems neuroscience. Front. Syst. Neurosci.2, 4 (2008).
Reddy, A. J. & Wehbe, L. Syntactic representations in the human brain: beyond effort-based metrics. Preprint at bioRXiv (2021).
PallierCDevauchelleA-DDehaeneSCortical representation of the constituent structure of sentencesProc. Natl Acad. Sci.2011108252225271:CAS:528:DC%2BC3MXitFWns7o%3D10.1073/pnas.1018711108212244153038732
Schoffelen, J. -M. et al. A 204-subject multimodal neuroimaging dataset to study language processing. Sci. Data6, 1–13 (2019).
BrodbeckCHongLESimonJZRapid transformation from auditory to linguistic representations of continuous speechCurr. Biol.201828397639831:CAS:528:DC%2BC1cXitlKmu7jF10.1016/j.cub.2018.10.042305036206339854
Heilbron, M., Armeni, K., Schoffelen, J.-M., Hagoort, P. & de Lange, F. P. A hierarchy of linguistic predictions during natural language comprehension. bioRxiv https://doi.org/10.1101/2020.12.03.410399 (2020).
Schrimpf, M. et al. Brain-score: which artificial neural network for object recognition is most brain-like? Preprint at bioRXiv (2018).
Heilbron, M., Armeni, K., Schoffelen, J.-M., Hagoort, P. & de Lange, F. P. A hierarchy of linguistic predictions during natural language comprehension. Preprint at bioRXiv (2020).
Jain, S. & Huth, A. in Advances in Neural Information Processing Systems (eds Bengio, S. et al.) vol. 31, 6628–6637 (Curran Associates, Inc., 2018).
Woolnough, O. et al. Spatiotemporal dynamics of orthographic and lexical processing in the ventral visual pathway. Nat. Hum. Behav.5, 389–398 (2021).
CoganGBSensory–motor transformations for speech occur bilaterallyNature201450794981:CAS:528:DC%2BC2cXjs1Ght7o%3D10.1038/nature12935244295204000028
Hale, J., Dyer, C., Kuncoro, A. & Brennan, J. R. Finding syntax in human encephalography with beam search. Preprint at https://arxiv.org/abs/1806.04127 (2018).
Huth, A. G., de Heer, W. A., Griffiths, T. L., Theunissen, F. E. & Gallant, J. L. Natural speech reveals the semantic maps that tile human cerebral cortex. Nature 532, 453–458 (2016).
Fedorenko, E., Blank, I., Siegelman, M. & Mineroff, Z. Lack of selectivity for syntax relative to word meanings throughout the language network. Cognition 203, 104348 (2020).
Toneva, M. & Wehbe, L. Interpreting and improving natural-language processing (in machines) with natural language-processing (in the brain). Advances in Neural Information Processing Systems 32 (2019).
Yamins, D. L. et al. Performance-optimized hierarchical models predict neural responses in higher visual cortex. Proc. Natl Acad. Sci. 111, 8619–8624 (2014).
Abraham, A. et al. Machine learning for neuroimaging with scikit-learn. Front. Neuroinform. 8, 14 (2014).
Kriegeskorte, N. Deep neural networks: a new framework for modeling biological vision and brain information processing. Annu. Rev. Vis. Sci. 1, 417–446 (2015).
Fischl, B. FreeSurfer. Neuroimage 62, 774–781 (2012).
Lakretz, Y. et al. The emergence of number and syntax units in LSTM language models. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers) (2019).
Devlin, J., Chang, M.-W., Lee, K. & Toutanova, K. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL-HLT (2019).
Millet, J. & King, J.-R. Inductive biases, pretraining and fine-tuning jointly account for brain responses to speech. Preprint at https://arxiv.org/abs/2103.01032 (2021).
Hale, J. T. et al. Neuro-computational models of language processing.
Saxe, A., Nelli, S. & Summerfield, C. If deep learning is the answer, what is the question? Nat. Rev. Neurosci. 22, 1–13 (2020).
Hickok, G. & Poeppel, D. The cortical organization of speech processing. Nat. Rev. Neurosci. 8, 393–402 (2007).
Tang, H. et al. Recurrent computations for visual pattern completion. Proc. Natl Acad. Sci. 115, 8835–8840 (2018).
Van Essen, D. C. A population-average, landmark- and surface-based (PALS) atlas of human cerebral cortex. Neuroimage 28, 635–662 (2005).
Caucheteux, C., Gramfort, A. & King, J.-R. GPT-2’s Activations Predict the Degree of Semantic Comprehension in the Human Brain (Cold Spring Harbor Laboratory Section: New Results, 2021).
Zellers, R., Holtzman, A., Bisk, Y., Farhadi, A. & Choi, Y. HellaSwag: can a machine really finish your sentence? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (2019).
Lake, B. M., Ullman, T. D., Tenenbaum, J. B. & Gershman, S. J. Building machines that learn and think like people. Behav. Brain Sci. 40 (2017).
Pennington, J., Socher, R. & Manning, C. D. Glove: global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP) Conference 1532–1543 (2014).
Manning, C. D., Clark, K., Hewitt, J., Khandelwal, U. & Levy, O. Emergent linguistic structure in artificial neural networks trained by self-supervision. Proc. Natl Acad. Sci. 117, 30046–30054 (2020).
Abnar, S., Ahmed, R., Mijnheer, M. & Zuidema, W. H. Experiential, distributional and dependency-based word embeddings have complementary roles in decoding brain activity. In Proceedings of the 8th Workshop on Cognitive Modeling and Computational Linguistics (CMCL 2018), (2018).
Yamins, D. L. & DiCarlo, J. J. Using goal-driven deep learning models to understand sensory cortex. Nat. Neurosci. 19, 356–365 (2016).
Dehaene, S., Yann, L. & Girardon, J. La plus belle histoire de l’intelligence: des origines aux neurones artificiels: vers une nouvelle étape de l’évolution (Robert Laffont, 2018).
Sassenhagen, J. & Fiebach, C. J. Traces of meaning itself: encoding distributional word vectors in brain activity. Neurobiol. Lang. 1, 54–76 (2020).
Athanasiou, N., Iosif, E. & Potamianos, A. Neural activation semantic models: computational lexical semantic models of localized neural activations. In Proceedings of the 27th International Conference on Computational Linguistics 2867–2878 (Association for Computational Linguistics, 2018).
Linked record: Commun. Biol. 6, 396 (2023). https://doi.org/10.1038/s42003-023-04776-4 (PMID: 37041229).
SubjectTerms
Algorithms
Artificial Intelligence
Biology
Biomedical and Life Sciences
Brain
Brain - diagnostic imaging
Brain - physiology
Brain mapping
Cognitive science
Computation and Language
Computational neuroscience
Computer Science
Deep learning
Functional magnetic resonance imaging
Humans
Language
Learning algorithms
Life Sciences
Machine Learning
Magnetic Resonance Imaging
Magnetoencephalography
Natural Language Processing
Neural networks
Neuroimaging
Neuroscience
  titleUrlDefault: https://www.proquest.com/central
  providerName: ProQuest
– providerCode: PRVPQU
  databaseName: Publicly Available Content Database
  customDbUrl:
  eissn: 2399-3642
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0001999634
  issn: 2399-3642
  databaseCode: PIMPY
  dateStart: 20220101
  isFulltext: true
  titleUrlDefault: http://search.proquest.com/publiccontent
  providerName: ProQuest
– providerCode: PRVPQU
  databaseName: Science Database
  customDbUrl:
  eissn: 2399-3642
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0001999634
  issn: 2399-3642
  databaseCode: M2P
  dateStart: 20220101
  isFulltext: true
  titleUrlDefault: https://search.proquest.com/sciencejournals
  providerName: ProQuest
linkProvider ProQuest
openUrl ctx_ver=Z39.88-2004&ctx_enc=info%3Aofi%2Fenc%3AUTF-8&rfr_id=info%3Asid%2Fsummon.serialssolutions.com&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.genre=article&rft.atitle=Brains+and+algorithms+partially+converge+in+natural+language+processing&rft.jtitle=Communications+biology&rft.au=Caucheteux%2C+Charlotte&rft.au=King%2C+Jean-R%C3%A9mi&rft.date=2022-02-16&rft.eissn=2399-3642&rft.volume=5&rft.issue=1&rft.spage=134&rft_id=info:doi/10.1038%2Fs42003-022-03036-1&rft_id=info%3Apmid%2F35173264&rft.externalDocID=35173264
thumbnail_l http://covers-cdn.summon.serialssolutions.com/index.aspx?isbn=/lc.gif&issn=2399-3642&client=summon
thumbnail_m http://covers-cdn.summon.serialssolutions.com/index.aspx?isbn=/mc.gif&issn=2399-3642&client=summon
thumbnail_s http://covers-cdn.summon.serialssolutions.com/index.aspx?isbn=/sc.gif&issn=2399-3642&client=summon