Preferential Proximal Policy Optimization

Detailed bibliography
Published in: Proceedings (IEEE International Conference on Emerging Technologies and Factory Automation), pp. 293-300
Main authors: Balasuntharam, Tamilselvan; Davoudi, Heidar; Ebrahimi, Mehran
Format: Conference paper
Language: English
Published: IEEE, 15 December 2023
ISSN: 1946-0759
Description
Summary: Proximal Policy Optimization (PPO) is a policy gradient approach that provides state-of-the-art performance in many domains by optimizing a "surrogate" objective function with stochastic gradient ascent. While PPO is an appealing approach in reinforcement learning, it does not consider the importance of states (states frequently seen in successful trajectories) in policy/value function updates. In this work, we introduce Preferential Proximal Policy Optimization (P3O), which incorporates the importance of these states into parameter updates. First, we determine the importance of each state as the variance of the action probabilities given that state multiplied by the value function, normalized and smoothed using an Exponentially Weighted Moving Average. Then, we incorporate the state's importance into the surrogate objective function; that is, we redefine the value and advantage estimation objective functions in the PPO approach. Unlike related approaches, our method determines state importance automatically and can be used with any algorithm that utilizes a value function. Empirical evaluations across six Atari environments demonstrate that our approach significantly outperforms the baseline (vanilla PPO) across the tested environments, highlighting the value of the proposed method in learning complex environments.
DOI: 10.1109/ICMLA58977.2023.00048
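
A minimal sketch of the mechanism the summary describes, written in NumPy for illustration. It is not the authors' implementation: the min-max normalization across the batch, the EWMA decay beta, the smoothing state ewma_prev, and the way the importance weights multiply the clipped surrogate and value losses are assumptions made here only to make the two steps (computing state importance, then weighting the PPO objectives) concrete.

import numpy as np

def state_importance(action_probs, values, ewma_prev, beta=0.9, eps=1e-8):
    """Importance per state: variance of the action distribution at that state
    multiplied by its value estimate, min-max normalized over the batch and
    smoothed with an exponentially weighted moving average (assumed details)."""
    raw = action_probs.var(axis=-1) * values
    norm = (raw - raw.min()) / (raw.max() - raw.min() + eps)
    return beta * ewma_prev + (1.0 - beta) * norm

def weighted_ppo_losses(ratio, advantages, values, returns, weights, clip_eps=0.2):
    """PPO clipped surrogate and squared-error value losses in which each
    state's contribution is scaled by its importance weight (one assumed way
    of incorporating state importance into the surrogate objective)."""
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    policy_loss = -(weights * np.minimum(unclipped, clipped)).mean()
    value_loss = (weights * (returns - values) ** 2).mean()
    return policy_loss, value_loss

# Example with random data for a batch of 8 states and 4 discrete actions.
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(4), size=8)        # action probabilities per state
vals = rng.normal(size=8)                        # value estimates V(s)
w = state_importance(probs, vals, ewma_prev=np.zeros(8))
ratio = np.exp(rng.normal(scale=0.1, size=8))    # new/old policy probability ratios
adv = rng.normal(size=8)                         # advantage estimates
policy_loss, value_loss = weighted_ppo_losses(ratio, adv, vals, vals + adv, w)

In practice these quantities would be computed with the same framework used for the policy network; NumPy is used here only to keep the sketch self-contained.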