A Model-Free Deep Reinforcement Learning Algorithm for Solving Multi-Agent Nash Equilibrium With Unstable Communication

Detailed Bibliography
Published in: IEEE Access, Vol. 13, pp. 43973–43980
Main authors: Jiang, Yuannan; Jiang, Shengming; Wang, Xiaofeng
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2025
ISSN: 2169-3536
Description
Summary: Most reinforcement learning (RL) algorithms proposed to solve Nash equilibrium in multi-agent systems assume stable communication conditions or rely on accurate models of the environment. However, these assumptions are often unrealistic in practical applications, since communication is not always stable and obtaining precise models of dynamic environments in real time is challenging. To address these issues, this paper proposes a model-free RL algorithm based on the deep deterministic policy gradient (DDPG) algorithm. The proposed method is designed to handle communication instability and the resulting variability in information exchange between agents. The algorithm does not require storing data from neighboring agents, and analytical and numerical results demonstrate that it achieves better adaptability and convergence under unstable communication conditions than existing methods.
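
The record gives only this summary, so the sketch below is not the authors' algorithm. It is a minimal PyTorch illustration of the standard DDPG update the method builds on, with a hypothetical comm_mask that zeros out neighbor features of the observation whenever the communication link is down; names such as obs_dim, act_dim, and comm_mask are illustrative assumptions, not notation from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Actor(nn.Module):
    """Deterministic policy: maps an observation to a bounded action."""
    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim), nn.Tanh())

    def forward(self, obs):
        return self.net(obs)

class Critic(nn.Module):
    """Q-function: scores an (observation, action) pair."""
    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1))

def soft_update(target, source, tau=0.005):
    # Polyak averaging of target-network parameters (standard in DDPG).
    for t, s in zip(target.parameters(), source.parameters()):
        t.data.mul_(1.0 - tau).add_(tau * s.data)

def ddpg_update(actor, critic, actor_t, critic_t, actor_opt, critic_opt,
                batch, gamma=0.99):
    # batch: obs, act, rew, next_obs, done, comm_mask. comm_mask is a
    # hypothetical per-feature mask that is 1 for the agent's own features
    # and 0 for neighbor features whose link was down, so stale neighbor
    # data never needs to be stored or replayed.
    obs, act, rew, next_obs, done, comm_mask = batch
    obs, next_obs = obs * comm_mask, next_obs * comm_mask
    with torch.no_grad():
        target_q = rew + gamma * (1.0 - done) * critic_t(next_obs, actor_t(next_obs))
    critic_loss = F.mse_loss(critic(obs, act), target_q)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()
    actor_loss = -critic(obs, actor(obs)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
    soft_update(actor_t, actor)
    soft_update(critic_t, critic)

A full implementation would also create the target networks as deep copies of the online networks, add exploration noise, and sample batches from a replay buffer; those pieces are omitted to keep the sketch short.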
DOI: 10.1109/ACCESS.2025.3549276