Deep deterministic policy gradient algorithm based on dung beetle optimization and priority experience replay mechanism

Detailed Description

Bibliographic Details
Published in: Scientific Reports, Vol. 15, No. 1, Art. 13978 (14 pp.)
Main authors: Zhu, Hengwei; Rong, Chuiting; Liu, Haorui
Format: Journal Article
Language: English
Published: London: Nature Publishing Group UK, 22 April 2025
Nature Portfolio
ISSN: 2045-2322
Online access: Full text
Description
Abstract: Reinforcement learning algorithms that handle continuous action spaces suffer from slow convergence and convergence to local optima. Hence, we propose a deep deterministic policy gradient algorithm based on the dung beetle optimization algorithm (DBOP–DDPG) and a priority experience replay mechanism. The method first introduces the dung beetle optimizer (DBO) to search with multiple populations simultaneously, which effectively keeps the algorithm from settling into a local optimum and improves its global optimization capability. We then design a criterion for ranking the priority of sample data: transitions in the experience replay mechanism are stored in three replay buffers according to their importance, and subsequent training samples from these buffers, which improves the algorithm's convergence speed. Finally, tests were conducted in three classic control environments from OpenAI Gym. The results show that the proposed method converges at least 10% faster than the comparison algorithms and increases the cumulative reward by up to 150.
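
The three-tier replay scheme described in the abstract can be sketched as follows. This is a minimal illustration, assuming absolute TD error as the importance criterion and fixed per-tier sampling ratios; the class name ThreeTierReplayBuffer, the thresholds, and the ratios are illustrative assumptions, not the paper's exact design.

    import random
    from collections import deque

    class ThreeTierReplayBuffer:
        # Transitions are routed into high/medium/low-importance buffers
        # by |TD error| (an assumed criterion) and sampled at fixed ratios.
        def __init__(self, capacity=100_000, hi_thresh=1.0, lo_thresh=0.1):
            self.tiers = [deque(maxlen=capacity) for _ in range(3)]  # hi, mid, lo
            self.hi_thresh = hi_thresh  # assumed threshold values
            self.lo_thresh = lo_thresh

        def add(self, transition, td_error):
            # Larger TD errors are treated as more important experience.
            err = abs(td_error)
            if err >= self.hi_thresh:
                self.tiers[0].append(transition)
            elif err >= self.lo_thresh:
                self.tiers[1].append(transition)
            else:
                self.tiers[2].append(transition)

        def sample(self, batch_size, ratios=(0.5, 0.3, 0.2)):
            # Draw a fixed share of the batch from each tier (assumed ratios).
            batch = []
            for tier, frac in zip(self.tiers, ratios):
                k = min(int(batch_size * frac), len(tier))
                batch.extend(random.sample(tier, k))
            # Top up from the fullest tier if rounding or empty tiers left a gap.
            pool = max(self.tiers, key=len)
            while len(batch) < batch_size and pool:
                batch.append(random.choice(pool))
            return batch

In a DDPG training loop, add() would be called after each environment step with the critic's TD error for that transition, and sample() would supply the minibatches for the actor and critic updates.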
DOI: 10.1038/s41598-025-99213-3