Managing Paradoxical Tensions in the Implementation of Explainable AI for Product Innovation

Bibliographic Details
Published in: 2025 33rd International Conference on Enabling Technologies: Infrastructure for Collaborative Enterprises (WETICE), pp. 1-6
Main Authors: Amarilli, Fabrizio; Uboldi, Sara; Saraceni, Francesca; Tencati, Lorenzo
Format: Conference Proceeding
Language: English
Published: IEEE, 23 July 2025
Description
Summary: This study examines how organizations manage tensions that arise during the implementation of Explainable Artificial Intelligence (XAI) in product innovation. While XAI has advanced technically, its impact on organizational routines, human interpretation of algorithmic outputs, and human-AI dynamics remains underexplored. Drawing on a qualitative case study of a global confectionery firm, we analyze the introduction of an XAI solution using participatory action research and paradox theory. We identify four persistent tensions: automation vs. human judgment, transparency vs. complexity, speed vs. accuracy, and standardization vs. customization. Rather than resolving these conflicts, the organization navigated them through both/and strategies that enabled human-AI collaboration. The findings extend paradox theory to XAI-driven innovation and contribute to the digital transformation literature by showing how explainability supports knowledge articulation, learning, and adoption. The study also offers practical guidance for designing XAI systems that complement human expertise in complex innovation settings.
DOI: 10.1109/WETICE67341.2025.11092074