Managing Paradoxical Tensions in the Implementation of Explainable AI for Product Innovation
| Published in: | 2025 33rd International Conference on Enabling Technologies: Infrastructure for Collaborative Enterprises (WETICE), pp. 1–6 |
|---|---|
| Main Authors: | , , , |
| Format: | Conference Proceeding |
| Language: | English |
| Published: | IEEE, 23.07.2025 |
| Online Access: | Get full text |
| Summary: | This study examines how organizations manage tensions that arise during the implementation of Explainable Artificial Intelligence (XAI) in product innovation. While XAI has advanced technically, its impact on organizational routines, human interpretation of algorithmic outputs, and human-AI dynamics remains underexplored. Drawing on a qualitative case study of a global confectionery firm, we analyze the introduction of an XAI solution using participatory action research and paradox theory. We identify four persistent tensions: automation vs. human judgment, transparency vs. complexity, speed vs. accuracy, and standardization vs. customization. Rather than resolving these conflicts, the organization navigated them through both/and strategies that enabled human-AI collaboration. The findings extend paradox theory to XAI-driven innovation and contribute to the digital transformation literature by showing how explainability supports knowledge articulation, learning, and adoption. The study also offers practical guidance for designing XAI systems that complement human expertise in complex innovation settings. |
|---|---|
| DOI: | 10.1109/WETICE67341.2025.11092074 |