Managing Paradoxical Tensions in the Implementation of Explainable AI for Product Innovation

Detailed Bibliography
Published in: 2025 33rd International Conference on Enabling Technologies: Infrastructure for Collaborative Enterprises (WETICE), pp. 1–6
Main Authors: Amarilli, Fabrizio; Uboldi, Sara; Saraceni, Francesca; Tencati, Lorenzo
Format: Conference Paper
Language: English
Published: IEEE, 23.07.2025
Description
Summary: This study examines how organizations manage tensions that arise during the implementation of Explainable Artificial Intelligence (XAI) in product innovation. While XAI has advanced technically, its impact on organizational routines, human interpretation of algorithmic outputs, and human-AI dynamics remains underexplored. Drawing on a qualitative case study of a global confectionery firm, we analyze the introduction of an XAI solution using participatory action research and paradox theory. We identify four persistent tensions: automation vs. human judgment, transparency vs. complexity, speed vs. accuracy, and standardization vs. customization. Rather than resolving these conflicts, the organization navigated them through both/and strategies that enabled human-AI collaboration. The findings extend paradox theory to XAI-driven innovation and contribute to the digital transformation literature by showing how explainability supports knowledge articulation, learning, and adoption. The study also offers practical guidance for designing XAI systems that complement human expertise in complex innovation settings.
DOI: 10.1109/WETICE67341.2025.11092074