Meta-Reinforcement Learning in Non-Stationary and Dynamic Environments

Detailed Bibliography
Published in: IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 45, Issue 3, pp. 3476-3491
Main Authors: Bing, Zhenshan; Lerch, David; Huang, Kai; Knoll, Alois
Format: Journal Article
Language: English
Publication Details: United States: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.03.2023
ISSN: 0162-8828, 1939-3539, 2160-9292
Description
Summary: In recent years, the field of deep reinforcement learning (DRL) has developed rapidly and is now applied in various domains, such as decision-making and control tasks. However, artificial agents trained with RL algorithms require large amounts of training data, unlike humans, who are able to learn new skills from very few examples. The concept of meta-reinforcement learning (meta-RL) was recently proposed to enable agents to learn similar but new skills from a small amount of experience by leveraging a set of tasks with a shared structure. Because of their task representation learning strategy with few-shot adaptation, most recent approaches are limited to narrow task distributions and stationary environments, where tasks do not change within episodes. In this work, we address those limitations and introduce a training strategy that is applicable to non-stationary environments, as well as a task representation based on Gaussian mixture models to model clustered task distributions. We evaluate our method on several continuous robotic control benchmarks. First, compared with state-of-the-art methods that are only applicable to stationary environments with few-shot adaptation, our algorithm achieves competitive asymptotic performance and superior sample efficiency in stationary environments with zero-shot adaptation. Second, our algorithm learns to perform successfully in non-stationary settings as well as in a continual learning setting, while learning well-structured task representations. Last, our algorithm learns basic distinct behaviors and well-structured task representations in task distributions with multiple qualitatively distinct tasks.
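
To make the Gaussian-mixture task representation concrete, the following is a minimal illustrative sketch, not the authors' implementation: it assumes latent task embeddings produced by some task-inference encoder, fits a Gaussian mixture model so that each component captures one cluster of tasks, and softly assigns a new embedding to a cluster without any further training, loosely mirroring zero-shot use of a learned task representation. All names, shapes, and data below are assumptions made for illustration.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Stand-in for latent task embeddings from a task-inference encoder:
# three qualitatively distinct task clusters in a 2-D latent space.
centers = np.array([[0.0, 0.0], [4.0, 4.0], [-4.0, 4.0]])
z = np.vstack([c + rng.normal(scale=0.5, size=(100, 2)) for c in centers])

# Fit a GMM so that each mixture component models one task cluster.
gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
gmm.fit(z)

# A new embedding is softly assigned to clusters via the posterior
# over mixture components, with no gradient updates (zero-shot style).
z_new = np.array([[3.6, 4.2]])
print("responsibilities:", gmm.predict_proba(z_new).round(3))
print("hard assignment:", gmm.predict(z_new))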
DOI: 10.1109/TPAMI.2022.3185549