AI algorithm transparency, pipelines for trust not prisms: mitigating general negative attitudes and enhancing trust toward AI

Bibliographic Details
Published in: Humanities & Social Sciences Communications, Vol. 12, No. 1, Article 1160 (13 pp.)
Main Authors: Park, Keonyoung; Yoon, Ho Young
Format: Journal Article
Language: English
Published: London: Palgrave Macmillan UK (Springer Nature B.V.), 23.07.2025
ISSN: 2662-9992
Description
Summary: This study explores artificial intelligence (AI) algorithm transparency as a means of mitigating negative attitudes and enhancing trust in AI systems and the companies that use them. Given the growing importance of generative AI such as ChatGPT in stakeholder communications, our research aims to understand how transparency can influence trust dynamics. In particular, we propose a shift from a reputation-focused prism model to a knowledge-centric pipeline model of AI trust, emphasizing transparency as a strategic tool to reduce uncertainty and enhance knowledge. To investigate these dynamics, we conducted an online experiment using a 2 (AI algorithm transparency: high vs. low) by 2 (issue involvement: high vs. low) between-subjects design. The results indicated that AI algorithm transparency significantly mitigated the negative relationship between a general negative attitude toward AI and trust in the parent company, particularly when issue involvement was high. This suggests that transparency, as both a technical feature and a communicative strategy, serves as an essential signal of trustworthiness capable of reducing skepticism even among those predisposed to distrust AI. Our findings extend prior literature by demonstrating that transparency not only fosters understanding but also acts as a signaling mechanism for organizational accountability. This has practical implications for organizations integrating AI, offering a viable strategy for cultivating trust. By highlighting transparency's role in trust-building, this research underscores its potential to enhance stakeholder confidence in AI systems and to support ethical AI integration across diverse contexts.
DOI: 10.1057/s41599-025-05116-z