Deep neural networks and humans both benefit from compositional language structure

Detailed bibliography
Title: Deep neural networks and humans both benefit from compositional language structure
Authors: Lukas Galke, Yoav Ram, Limor Raviv
Source: Nature Communications (Nat Commun), Vol. 15, Iss. 1, pp. 1–13 (2024)
Publisher: Springer Science and Business Media LLC, 2024.
Year of publication: 2024
Subjects: 03 medical and health sciences; 0301 basic medicine; 0303 health sciences; Deep Learning; Humans; Language; Learning/physiology; Linguistics; Natural Language Processing; Neural Networks, Computer; Science
Description: Deep neural networks drive the success of natural language processing. A fundamental property of language is its compositional structure, allowing humans to systematically produce forms for new meanings. For humans, languages with more compositional and transparent structures are typically easier to learn than those with opaque and irregular structures. However, this learnability advantage has not yet been shown for deep neural networks, limiting their use as models for human language learning. Here, we directly test how neural networks compare to humans in learning and generalizing different languages that vary in their degree of compositional structure. We evaluate the memorization and generalization capabilities of a large language model and recurrent neural networks, and show that both deep neural networks exhibit a learnability advantage for more structured linguistic input: neural networks exposed to more compositional languages show more systematic generalization, greater agreement between different agents, and greater similarity to human learners.
Document type: Article; Other literature type
Language: English
ISSN: 2041-1723
DOI: 10.1038/s41467-024-55158-1
Access URL: https://pubmed.ncbi.nlm.nih.gov/39738033
https://doaj.org/article/fae43946b97447cf9d3969e4a7ae2cc3
Rights: CC BY
Accession number: edsair.doi.dedup.....b1358f95b001f4e5d36bb7a1a1ef779e
Database: OpenAIRE
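
Note: The description's central variable, "degree of compositional structure," is commonly operationalized in this line of work as topographic similarity: the correlation between pairwise distances in meaning space and pairwise string distances between the corresponding labels. The sketch below is a minimal illustration of that measure; the toy meanings, labels, and distance functions are assumptions, not the paper's exact languages or metric.

```python
# Illustrative sketch: topographic similarity as a proxy for the
# "degree of compositional structure" of a miniature language.
# All names and toy data below are hypothetical examples.
from itertools import combinations

from scipy.stats import spearmanr


def hamming(a, b):
    # Distance between two meaning vectors of equal length.
    return sum(x != y for x, y in zip(a, b))


def edit_distance(s, t):
    # Levenshtein distance via a single-row dynamic program.
    row = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        prev, row[0] = row[0], i
        for j, ct in enumerate(t, 1):
            prev, row[j] = row[j], min(
                row[j] + 1,          # delete cs
                row[j - 1] + 1,      # insert ct
                prev + (cs != ct),   # substitute cs with ct
            )
    return row[-1]


def topographic_similarity(meanings, forms):
    # Spearman correlation between meaning distances and form
    # distances over all pairs of items. Higher values indicate a
    # more compositional language: similar meanings get similar labels.
    pairs = list(combinations(range(len(meanings)), 2))
    meaning_d = [hamming(meanings[i], meanings[j]) for i, j in pairs]
    form_d = [edit_distance(forms[i], forms[j]) for i, j in pairs]
    return spearmanr(meaning_d, form_d).correlation


# Toy two-feature meaning space (e.g. shape x color).
meanings = [(0, 0), (0, 1), (1, 0), (1, 1)]
compositional = ["mo-fu", "mo-ki", "ta-fu", "ta-ki"]  # systematic labels
holistic = ["wep", "zog", "firt", "bla"]              # arbitrary labels

print(topographic_similarity(meanings, compositional))  # 1.0
print(topographic_similarity(meanings, holistic))       # ~0
```

Under this measure, the "more compositional languages" of the abstract are those whose label strings mirror the structure of the meaning space, which is what makes systematic generalization to new meanings possible.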