DSAC: Distributional Soft Actor-Critic for Risk-Sensitive Reinforcement Learning.

Saved in:
Detailed bibliography
Title: DSAC: Distributional Soft Actor-Critic for Risk-Sensitive Reinforcement Learning.
Authors: Ma, Xiaoteng; Chen, Junyao; Xia, Li; Yang, Jun; Zhao, Qianchuan; Zhou, Zhengyuan
Source: Journal of Artificial Intelligence Research, 2025, Vol. 83, pp. 1-28 (28 pages)
Subjects: Reinforcement learning, Entropy, Algorithms
Abstract: We present Distributional Soft Actor-Critic (DSAC), a distributional reinforcement learning (RL) algorithm that combines distributional information about accumulated rewards with the entropy-driven exploration of the Soft Actor-Critic (SAC) algorithm. DSAC models the randomness in both actions and rewards, surpassing baseline performance on various continuous control tasks. Unlike standard approaches that solely maximize expected rewards, we propose a unified framework for risk-sensitive learning, one that optimizes a risk-related objective while balancing entropy to encourage exploration. Extensive experiments demonstrate DSAC's effectiveness in enhancing agent performance on both risk-neutral and risk-sensitive control tasks. [ABSTRACT FROM AUTHOR]
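Note: the record contains no implementation details. Purely as an illustration of the combination the abstract describes (a distributional return estimate, SAC-style entropy regularization, and a risk-sensitive objective), the following minimal Python sketch distorts return quantiles with CVaR and adds an entropy bonus. The function name, arguments, and the choice of CVaR as the risk measure are assumptions made for illustration, not details taken from the paper.

import numpy as np

def soft_risk_value(return_quantiles, log_prob, alpha=0.2, cvar_level=0.25):
    # Hypothetical sketch of a risk-sensitive, entropy-regularized value.
    # return_quantiles: estimated quantiles of the return distribution Z(s, a)
    # log_prob: log pi(a|s) of the sampled action (entropy term, as in SAC)
    # alpha: entropy temperature; cvar_level: fraction of worst quantiles kept
    q = np.sort(np.asarray(return_quantiles, dtype=float))
    k = max(1, int(np.ceil(cvar_level * len(q))))
    risk_value = q[:k].mean()                 # CVaR: mean of the lowest quantiles
    return risk_value + alpha * (-log_prob)   # add the SAC-style entropy bonus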
Database: Complementary Index
Description
ISSN: 1076-9757
DOI: 10.1613/jair.1.17526