Multi‐Agent Reinforcement Learning Framework for Optimizing Smart Cities as System of Systems.

Bibliographic Details
Title: Multi‐Agent Reinforcement Learning Framework for Optimizing Smart Cities as System of Systems.
Authors: Sheikh, Arifuzzaman (Arif), arif.sheikh@colostate.edu; Chong, Edwin K. P.
Source: Systems Engineering. Aug 2025, p. 1, 17 pp., 21 illustrations.
Subject Terms: *SMART cities, *SYSTEM of systems, *STATISTICAL decision making, *REINFORCEMENT learning, *COMPUTER performance
Abstract: This paper presents a novel framework for optimizing smart cities as a System of Systems (SoS) by integrating Multi‐Agent Reinforcement Learning (MARL) with traditional systems engineering methodologies. Constituent systems, modeled as agents across domains such as transportation, energy, public safety, and communication, operate autonomously under diverse control modes (e.g., Acknowledged, Directed) while aligning with overarching SoS objectives. The proposed framework leverages decentralized policy learning and augmented reward mechanisms to improve coordination, adaptability, and system‐wide efficiency. Simulation results demonstrate a 14.3% increase in system efficiency, a 12.5% improvement in adaptability, and a 25.0% enhancement in coordination effectiveness. These findings underscore the potential of AI‐driven decision‐making to manage emergent behavior and complexity in dynamic, large‐scale urban environments. [ABSTRACT FROM AUTHOR]
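
To make the abstract's notion of "decentralized policy learning and augmented reward mechanisms" concrete, the Python sketch below shows independent learners (one per urban domain) whose local rewards are blended with a shared SoS-level term. This is a minimal illustrative toy, not the paper's implementation: the environment stub, agent count, and the weight SOS_WEIGHT are hypothetical placeholders.

    # Illustrative sketch (not the paper's code): independent tabular learners whose
    # rewards are augmented with a shared System-of-Systems (SoS) term, the general
    # idea described in the abstract. Environment, state/action spaces, and SOS_WEIGHT
    # are hypothetical placeholders.
    import random
    from collections import defaultdict

    N_AGENTS = 3           # e.g., transportation, energy, public-safety agents
    N_ACTIONS = 4
    ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1
    SOS_WEIGHT = 0.5       # assumed trade-off between local and SoS objectives

    # One Q-table per agent: decentralized policies, no shared parameters.
    q_tables = [defaultdict(lambda: [0.0] * N_ACTIONS) for _ in range(N_AGENTS)]

    def choose_action(q_row):
        """Epsilon-greedy action selection over one agent's Q-values."""
        if random.random() < EPS:
            return random.randrange(N_ACTIONS)
        return max(range(N_ACTIONS), key=lambda a: q_row[a])

    def env_step(state, actions):
        """Toy environment stub: returns next state, per-agent local rewards, and a
        shared SoS-level reward (e.g., city-wide efficiency). Purely illustrative."""
        local = [random.random() for _ in actions]            # per-domain rewards
        sos = sum(actions) / (N_AGENTS * (N_ACTIONS - 1))     # crude coordination proxy
        return (state + 1) % 10, local, sos

    state = 0
    for t in range(1000):
        actions = [choose_action(q_tables[i][state]) for i in range(N_AGENTS)]
        next_state, local_rewards, sos_reward = env_step(state, actions)
        for i in range(N_AGENTS):
            # Augmented reward: local objective plus weighted SoS objective.
            r = local_rewards[i] + SOS_WEIGHT * sos_reward
            best_next = max(q_tables[i][next_state])
            q_tables[i][state][actions[i]] += ALPHA * (
                r + GAMMA * best_next - q_tables[i][state][actions[i]]
            )
        state = next_state

Each agent keeps its own policy (here a Q-table), so learning stays decentralized; coordination pressure enters only through the weighted SoS term added to each local reward.
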
Database: Academic Search Index
ISSN: 1098-1241
DOI: 10.1002/sys.70006