Preferential Proximal Policy Optimization

Bibliographic Details
Published in: Proceedings (IEEE International Conference on Emerging Technologies and Factory Automation), pp. 293 - 300
Authors: Balasuntharam, Tamilselvan; Davoudi, Heidar; Ebrahimi, Mehran
Format: Conference paper
Language: English
Published: IEEE, 15.12.2023
ISSN: 1946-0759
Online access: Full text
Description
Abstract: Proximal Policy Optimization (PPO) is a policy gradient approach that provides state-of-the-art performance in many domains by optimizing a "surrogate" objective function with stochastic gradient ascent. While PPO is an appealing approach in reinforcement learning, it does not consider the importance of states (states that appear frequently in successful trajectories) in policy/value function updates. In this work, we introduce Preferential Proximal Policy Optimization (P3O), which incorporates the importance of these states into parameter updates. First, we determine the importance of each state as the variance of the action probabilities given that state multiplied by the value function, normalized and smoothed with an Exponentially Weighted Moving Average. Then, we incorporate the state's importance into the surrogate objective function; that is, we redefine the value and advantage estimation objective functions of the PPO approach. Unlike related approaches, our method selects the importance of states automatically and can be used with any algorithm that utilizes a value function. Empirical evaluations across six Atari environments demonstrate that our approach significantly outperforms the baseline (vanilla PPO), highlighting the value of the proposed method in learning complex environments.
DOI: 10.1109/ICMLA58977.2023.00048
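
Below is a minimal sketch, not the authors' implementation, of the state-importance idea described in the abstract: the variance of the action distribution at a state is multiplied by the value estimate, normalized, smoothed with an exponentially weighted moving average, and then used to weight the clipped PPO surrogate. PyTorch is assumed, and all names and choices here (state_importance, weighted_ppo_loss, ema_alpha, clip_eps, min-max normalization over the batch, treating the EWMA as running across successive batches) are illustrative assumptions rather than details taken from the paper.

```python
# Minimal sketch of P3O-style state-importance weighting (PyTorch assumed).
# Function and parameter names are illustrative, not from the paper.
import torch


def state_importance(action_probs, values, prev_importance, ema_alpha=0.1, eps=1e-8):
    """Importance of each state: variance of its action distribution times its
    value estimate, min-max normalized over the batch, then smoothed with an
    exponentially weighted moving average (EWMA) against a previous estimate."""
    raw = action_probs.var(dim=-1) * values                    # (batch,)
    norm = (raw - raw.min()) / (raw.max() - raw.min() + eps)   # scale to [0, 1]
    return ema_alpha * norm + (1.0 - ema_alpha) * prev_importance


def weighted_ppo_loss(ratio, advantages, importance, clip_eps=0.2):
    """Clipped PPO surrogate in which each state's term is scaled by its importance."""
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    return -(importance * torch.min(unclipped, clipped)).mean()


# Toy usage with random tensors (shapes only; not meaningful training data).
probs = torch.softmax(torch.randn(8, 4), dim=-1)   # action distributions for 8 states
vals = torch.randn(8)                              # value estimates V(s)
imp = state_importance(probs, vals, prev_importance=torch.zeros(8))
loss = weighted_ppo_loss(ratio=torch.ones(8), advantages=torch.randn(8), importance=imp)
```

The sketch only shows an importance-weighted policy term; how the weight enters the redefined value and advantage estimation objectives is specified in the paper itself.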