Beyond human-in-the-loop: Sensemaking between artificial intelligence and human intelligence collaboration
Saved in:
| Title: | Beyond human-in-the-loop: Sensemaking between artificial intelligence and human intelligence collaboration |
|---|---|
| Authors: | Xinyue Hao, Emrah Demir, Daniel Eyers |
| Source: | Sustainable Futures, Vol. 10, p. 101152 (2025) |
| Publisher: | Elsevier, 2025. |
| Publication year: | 2025 |
| Collection: | LCC: Environmental sciences; LCC: Technology |
| Subjects: | AI-human collaboration, Decision-making, Operations and supply chain management (OSCM), Sociotechnical systems, Cognitive mapping, Environmental sciences, GE1-350, Technology |
| Description: | In contemporary operational environments, decision-making is increasingly shaped by the interaction between intuitive, fast-acting System 1 processes and slow, analytical System 2 reasoning. Human intelligence (HI) navigates fluidly between these cognitive modes, enabling adaptive responses to both structured and ambiguous situations. In parallel, artificial intelligence (AI) has rapidly evolved to support tasks typically associated with System 2 reasoning, such as optimization, forecasting, and rule-based analysis, with speed and precision that can exceed human capabilities in certain structured contexts. To investigate how AI and HI collaborate in practice, we conducted 28 in-depth interviews across 9 leading firms recognized as benchmarks in AI adoption within operations and supply chain management (OSCM). These interviews targeted key HI agents (operations managers, data scientists, and algorithm engineers) and were situated within carefully selected, AI-rich scenarios. Using a sensemaking framework and cognitive mapping methodology, we explored how HI interprets and interacts with AI across the pre-development, deployment, and post-development phases. Our findings reveal that collaboration is a dynamic and co-constitutive process of institutional co-production, structured by epistemic asymmetry, symbolic accountability, and infrastructural interdependence. While AI contributes speed, scale, and pattern recognition in routine, structured environments, human actors provide ethical oversight, contextual judgment, and strategic interpretation, which are particularly vital in uncertain or ethically charged contexts. Moving beyond static models such as “human-in-the-loop” or “AI-assistance,” this study offers a novel framework that conceptualizes AI and HI collaboration as a sociotechnical system. Theoretically, it bridges fragmented literatures in AI, cognitive science, and institutional theory. Practically, it offers actionable insights for designing collaborative infrastructures that are both ethically aligned and organizationally resilient. As AI ecosystems grow more complex and decentralized, our findings highlight the need for reflexive governance mechanisms to support adaptive, interpretable, and accountable human–machine decision-making. |
| Document type: | article |
| File description: | electronic resource |
| Language: | English |
| ISSN: | 2666-1888 |
| Relation: | http://www.sciencedirect.com/science/article/pii/S2666188825007166; https://doaj.org/toc/2666-1888 |
| DOI: | 10.1016/j.sftr.2025.101152 |
| Access URL: | https://doaj.org/article/e2804288abf64590b1a02075c3c308e3 |
| Accession number: | edsdoj.2804288abf64590b1a02075c3c308e3 |
| Database: | Directory of Open Access Journals |