Studying large language models as compression algorithms for human culture

Large language models (LLMs) extract and reproduce the statistical regularities in their training data. Researchers can use these models to study the conceptual relationships encoded in this training data (i.e., the open internet), providing a remarkable opportunity to understand the cultural distinctions embedded within much of recorded human communication.


Bibliographic Details
Published in: Trends in Cognitive Sciences, Vol. 28, No. 3, pp. 187–189
Author: Buttrick, Nicholas
Format: Journal Article
Language: English
Published: England: Elsevier Ltd, 01.03.2024
ISSN: 1364-6613, 1879-307X
Online Access: Full text
Description
Summary: Large language models (LLMs) extract and reproduce the statistical regularities in their training data. Researchers can use these models to study the conceptual relationships encoded in this training data (i.e., the open internet), providing a remarkable opportunity to understand the cultural distinctions embedded within much of recorded human communication.
DOI:10.1016/j.tics.2024.01.001