Studying large language models as compression algorithms for human culture

Detailed bibliography
Published in: Trends in Cognitive Sciences, Volume 28, Issue 3, pp. 187-189
Main author: Buttrick, Nicholas
Format: Journal Article
Language: English
Published: England: Elsevier Ltd, 01.03.2024
ISSN: 1364-6613, 1879-307X
Online access: Get full text
Description
Summary: Large language models (LLMs) extract and reproduce the statistical regularities in their training data. Researchers can use these models to study the conceptual relationships encoded in this training data (i.e., the open internet), providing a remarkable opportunity to understand the cultural distinctions embedded within much of recorded human communication.
DOI:10.1016/j.tics.2024.01.001