Studying large language models as compression algorithms for human culture

Bibliographic Details
Published in: Trends in Cognitive Sciences, Vol. 28, No. 3, pp. 187-189
Main Author: Buttrick, Nicholas
Format: Journal Article
Language: English
Published: England: Elsevier Ltd, 01.03.2024
ISSN: 1364-6613, 1879-307X
Description
Summary: Large language models (LLMs) extract and reproduce the statistical regularities in their training data. Researchers can use these models to study the conceptual relationships encoded in this training data (i.e., the open internet), providing a remarkable opportunity to understand the cultural distinctions embedded within much of recorded human communication.
DOI: 10.1016/j.tics.2024.01.001