Engaging with AI in Crowdsourced Digitization of Ancient Texts: User Perception and Interaction.

Bibliographic Details
Title: Engaging with AI in Crowdsourced Digitization of Ancient Texts: User Perception and Interaction.
Authors: Zhang, Chenyu (zhangchenyu@stu.pku.edu.cn); Li, Wenqi (wenqili@pku.edu.cn); Luo, Zeyang (lzeyang@stu.pku.edu.cn); Zhang, Pengyi (pengyi@pku.edu.cn)
Source: Proceedings of the Association for Information Science & Technology, Oct 2025, Vol. 62, Issue 1, pp. 1167-1172. 6p.
Subject Terms: *Artificial intelligence, *Digitization, *Cultural property, Trust, Digital humanities
Abstract: This paper explores how users perceive and interact with AI in a crowdsourced ancient text digitization project. We conducted semi‐structured interviews with 28 participants from the ancient text digitization crowdsourcing project "I am a Collator of Ancient Texts". Participants viewed AI recognition as accurate and efficient but noted its limitations with deteriorated text and complex layouts. We found that AI misidentification, or failure to provide alternative character suggestions, may reduce users' task completion efficiency or even influence their task selection. A key theme emerging from the study is how user trust develops when interacting with AI: despite initial skepticism, trust in AI technologies gradually increased with positive interaction experiences, leading users to adjust their validation strategies. We also observed that participants acquired knowledge from AI recognition and suggestion results, but could also be misled when the AI made errors. This study contributes to understanding human‐AI collaboration in crowdsourced cultural heritage digitization and suggests that future platforms should provide customizable confidence indicators, clearer AI explanations, and learning supports to accommodate diverse users. [ABSTRACT FROM AUTHOR]
Database: Library, Information Science & Technology Abstracts
ISSN: 2373-9231
DOI: 10.1002/pra2.1352