Facilitating Trustworthy Human-Agent Collaboration in LLM-based Multi-Agent System oriented Software Engineering

Bibliographic Details
Title: Facilitating Trustworthy Human-Agent Collaboration in LLM-based Multi-Agent System oriented Software Engineering
Authors: Ronanki, Krishna, 1997
Source: 33rd ACM International Conference on the Foundations of Software Engineering (FSE Companion 2025), Trondheim, Norway. In: Proceedings of the ACM SIGSOFT Symposium on the Foundations of Software Engineering, pp. 1333-1337
Keywords: Human-Agent Collaboration, LLM-Based Multi-Agent Systems, DevOps, Software Engineering, Large Language Models, Trustworthy AI
Description: Multi-agent autonomous systems (MAS) are better suited than singular autonomous agents to addressing challenges that span multiple domains. This holds true within the field of software engineering (SE) as well. State-of-the-art research on MAS within SE focuses on integrating LLMs at the core of autonomous agents to create LLM-based multi-agent autonomous (LMA) systems. However, introducing LMA systems into SE brings a number of challenges. One of the major challenges is allocating tasks between humans and the LMA system strategically and in a trustworthy manner. To address this challenge, this work-in-progress article proposes a RACI-based framework, along with implementation guidelines and an example implementation of the framework. The proposed framework can facilitate efficient collaboration, ensure accountability, and mitigate potential risks associated with LLM-driven automation while aligning with the Trustworthy AI guidelines. The next steps for this work, outlining the planned empirical validation method, are also presented.
File description: electronic
Access URL: https://research.chalmers.se/publication/548045
https://research.chalmers.se/publication/548045/file/548045_Fulltext.pdf
Database: SwePub
ISSN: 1539-7521
DOI: 10.1145/3696630.3728717
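
Note: The RACI-based allocation referred to in the description above follows the standard responsibility assignment scheme (R = Responsible, A = Accountable, C = Consulted, I = Informed). The following minimal Python sketch illustrates how such a matrix might map SE tasks to a human engineer and an LMA system; the specific tasks and role assignments are illustrative assumptions, not taken from the cited paper.

# Hypothetical RACI-style allocation between a human engineer and an LMA system.
# Tasks and assignments below are assumptions for illustration only.
raci_matrix = {
    "requirements_elicitation": {"human": "A/R", "lma_system": "C"},
    "code_generation":          {"human": "A",   "lma_system": "R"},
    "test_execution":           {"human": "I",   "lma_system": "R"},
    "production_deployment":    {"human": "A/R", "lma_system": "C"},
}

def responsible_parties(task: str) -> list[str]:
    """Return the parties marked Responsible (R) for a given task."""
    return [party for party, role in raci_matrix[task].items() if "R" in role]

if __name__ == "__main__":
    for task in raci_matrix:
        print(f"{task}: responsible -> {responsible_parties(task)}")

Such a matrix keeps a human Accountable for every task, which is one straightforward way to align automated task allocation with human oversight requirements in Trustworthy AI guidelines.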