AI Chatbots and Subject Cataloging: A Performance Test

Bibliographic Details
Published in: Library Resources & Technical Services, Vol. 69, No. 2
Main Authors: Dobreski, Brian; Hastings, Christopher
Format: Journal Article
Language: English
Published: American Library Association, 01.04.2025
ISSN: 0024-2527
Online Access: Full text
Description
Summary: Libraries show increasing interest in incorporating AI tools into their workflows, particularly easily accessible, free-to-use chatbots. However, empirical evidence regarding the effectiveness of these tools in performing traditionally time-consuming subject cataloging tasks is limited. In this study, the researchers sought to assess how well AI tools perform basic subject heading and classification number assignment. Using a well-established instructional cataloging text as a basis, the researchers developed and administered a test designed to evaluate the effectiveness of three chatbots (ChatGPT, Gemini, Copilot) in assigning Dewey Decimal Classification numbers, Library of Congress Classification numbers, and Library of Congress Subject Headings. The quantity and quality of errors in chatbot responses were analyzed. Overall performance of these tools was poor, particularly for assigning classification numbers. Frequent sources of error included assigning overly broad numbers or numbers for incorrect topics. Although subject heading assignment was also poor, ChatGPT showed more promise here, supporting previous observations that chatbots may hold more immediate potential for this task. Although AI chatbots do not currently show promise in reducing the time and effort associated with subject cataloging, this may change in the future. For now, the findings from this study offer caveats for catalogers already working with these tools and underscore the continuing importance of human expertise and oversight in cataloging.
DOI: 10.5860/lrts.69n1