AI Chatbots and Subject Cataloging: A Performance Test

Bibliographic Details
Published in: Library resources & technical services Vol. 69; no. 2
Main Authors: Dobreski, Brian, Hastings, Christopher
Format: Journal Article
Language: English
Published: American Library Association 01.04.2025
Subjects:
ISSN: 0024-2527
Online Access: Get full text
Description
Summary: Libraries show an increasing interest in incorporating AI tools into their workflows, particularly easily accessible and free-to-use chatbots. However, empirical evidence is limited regarding the effectiveness of these tools in performing traditionally time-consuming subject cataloging tasks. In this study, researchers sought to assess the performance of AI tools in performing basic subject heading and classification number assignment. Using a well-established instructional cataloging text as a basis, researchers developed and administered a test designed to evaluate the effectiveness of three chatbots (ChatGPT, Gemini, Copilot) in assigning Dewey Decimal Classification, Library of Congress Classification, and Library of Congress Subject Heading terms and numbers. The quantity and quality of errors in chatbot responses were analyzed. Overall performance of these tools was poor, particularly for assigning classification numbers. Frequent sources of error included assigning overly broad numbers or numbers for incorrect topics. Although subject heading assignment was also poor, ChatGPT showed more promise here, backing up previous observations that chatbots may hold more immediate potential for this task. Although AI chatbots do not currently show promise in reducing the time and effort associated with subject cataloging, this may change in the future. For now, findings from this study offer caveats for catalogers already working with these tools and underscore the continuing importance of human expertise and oversight in cataloging.
DOI: 10.5860/lrts.69n1