Can ChatGPT Analyze Textual Data? The Case of Conceptual Metaphors in Short Stories of Language Assessment

Bibliographic Details
Published in: Journal of Language Teaching and Research, Vol. 16, No. 5, pp. 1665–1672
Authors: Geng, Hui; Wei, Han; Nimehchisalem, Vahid; Azar, Ali Sorayyaei
Format: Journal Article
Language: English
Published: London: Academy Publication Co., Ltd, 01.09.2025
ISSN: 1798-4769, 2053-0684
Online access: Full text
Description
Abstract: ChatGPT, a modern artificial intelligence (AI) chatbot, has emerged as an unprecedented breakthrough in multiple domains traditionally dominated by humans. Its ability to engage in human-like conversations has the potential to influence the fields of linguistics and education. The fundamental functions of ChatGPT in teaching and learning have been the subject of some research, but its application in textual analysis has received scant attention. This study aims to investigate how ChatGPT assists in analyzing conceptual metaphors (CMs) in short stories used in language assessment. Based on the Conceptual Metaphor Theory (CMT) of Lakoff and Johnson (1980), the study identified the structural, orientational, and ontological metaphors in 22 short stories from the book Tests and Us 2, first using the cutting-edge AI program ChatGPT (GPT-4), then refined by the researchers and validated by linguistic experts. The results showed a total of 250 conceptual metaphors: 131 structural metaphors, 64 ontological metaphors, and 55 orientational metaphors. When validated by human specialists, GPT-4 accurately recognized conceptual metaphors in 81.2% of the cases, amounting to 203 instances. The most frequent error made by GPT-4 was classifying non-metaphoric expressions as metaphoric, followed by providing unclear explanations and classifying metaphoric expressions as non-metaphoric. Other errors included overly general or overly non-literal interpretations, unmatched categorization, and incorrect mapping of the source or target domain. Our study shows that ChatGPT, despite its controversial position in academic settings, can be used as a relatively reliable tool for aiding the analysis of textual data.
DOI: 10.17507/jltr.1605.24