A Model-Free Deep Reinforcement Learning Algorithm for Solving Multi-Agent Nash Equilibrium With Unstable Communication
Saved in:
| Published in: | IEEE Access, Vol. 13, pp. 43973-43980 |
|---|---|
| Main authors: | , , |
| Format: | Journal Article |
| Language: | English |
| Published: | Piscataway: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 2025 |
| Subjects: | |
| ISSN: | 2169-3536 |
| Online access: | Full text |
| Abstract: | Most reinforcement learning (RL) algorithms proposed to solve Nash equilibrium in multi-agent systems assume stable communication conditions or rely on accurate models of the environment. However, these assumptions are often unrealistic in practical applications, since communication is not always stable and obtaining precise models of dynamic environments in real time is challenging. To address these issues, this paper proposes a model-free RL algorithm based on the deep deterministic policy gradient (DDPG) algorithm. The proposed method is designed to handle communication instability and the resulting variability in information exchange between agents. Analytical and numerical results demonstrate that the proposed algorithm does not require storing data from neighboring agents, and that it achieves better adaptability and convergence under unstable communication conditions than existing methods. |
|---|---|
| ISSN: | 2169-3536 |
| DOI: | 10.1109/ACCESS.2025.3549276 |
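
To make the idea in the abstract concrete, the following is a minimal, illustrative sketch (not the paper's actual implementation): a single agent's DDPG-style actor-critic update in which neighbor observations are randomly zeroed out to mimic unstable communication, so the agent never relies on stored neighbor data. All names, dimensions, the link-failure probability, and the masking scheme are assumptions made for this example; target networks and a replay buffer are omitted for brevity.

```python
# Hedged sketch of a DDPG-style update under unstable communication.
# Assumptions: dimensions, P_LINK_FAILURE, and mask_neighbors are illustrative,
# not taken from the paper.
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM, N_NEIGHBORS = 4, 2, 3
P_LINK_FAILURE = 0.3  # assumed probability that a neighbor's message is lost

actor = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.ReLU(),
                      nn.Linear(64, ACT_DIM), nn.Tanh())
# Critic sees the agent's own observation/action plus (possibly masked) neighbor observations.
critic = nn.Sequential(
    nn.Linear(OBS_DIM + ACT_DIM + N_NEIGHBORS * OBS_DIM, 64), nn.ReLU(), nn.Linear(64, 1)
)
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-3)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

def mask_neighbors(neighbor_obs: torch.Tensor) -> torch.Tensor:
    """Zero out each neighbor's observation with probability P_LINK_FAILURE,
    so no stale neighbor data needs to be stored when a link drops."""
    keep = (torch.rand(neighbor_obs.shape[0], N_NEIGHBORS, 1) > P_LINK_FAILURE).float()
    return (neighbor_obs * keep).flatten(start_dim=1)

# One synthetic transition batch (observation, action, reward, next observation, neighbor observations).
batch = 32
obs = torch.randn(batch, OBS_DIM)
act = torch.randn(batch, ACT_DIM)
rew = torch.randn(batch, 1)
next_obs = torch.randn(batch, OBS_DIM)
neigh = torch.randn(batch, N_NEIGHBORS, OBS_DIM)

gamma = 0.99
masked = mask_neighbors(neigh)

# Critic update: regress Q(s, a, masked neighbors) toward a one-step TD target.
with torch.no_grad():
    next_act = actor(next_obs)
    target_q = rew + gamma * critic(torch.cat([next_obs, next_act, masked], dim=1))
q = critic(torch.cat([obs, act, masked], dim=1))
critic_loss = nn.functional.mse_loss(q, target_q)
critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

# Actor update: deterministic policy gradient, ascending the critic's value estimate.
actor_loss = -critic(torch.cat([obs, actor(obs), masked], dim=1)).mean()
actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
print(f"critic_loss={critic_loss.item():.3f} actor_loss={actor_loss.item():.3f}")
```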