Adaptive Environment Generation for Continual Learning: Integrating Constraint Logic Programming With Deep Reinforcement Learning

Bibliographic Details
Published in: IEEE Transactions on Cognitive and Developmental Systems, Vol. 17, No. 3, pp. 540-553
Authors: Boutyour, Youness; Idrissi, Abdellah
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.06.2025
ISSN: 2379-8920, 2379-8939
Online access: Full text
Description
Abstract: In this article, we introduce a novel framework that combines constraint logic programming (CLP) with deep reinforcement learning (DRL) to create adaptive environments for continual learning. We focus on two challenging domains: Sudoku puzzles and scheduling problems, where environment complexity evolves based on the agent's performance. By integrating CLP, we dynamically adjust problem difficulty in response to the agent's learning trajectory, ensuring a progressively challenging environment that fosters enhanced problem-solving skills. Empirical results across 500,000 episodes show substantial improvements in solve rates, increasing from 6% to 86% for Sudoku puzzles and from 7% to 79% for scheduling problems, alongside significant reductions in the average number of steps required to solve each problem. The proposed adaptive environment generation demonstrates the potential of CLP in advancing RL agents' continual learning capabilities by dynamically regulating complexity, thus improving their adaptability and learning efficiency. This framework contributes to the broader fields of reinforcement learning and procedural content generation by introducing an innovative approach to continual adaptation in complex environments.
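
The abstract describes performance-driven difficulty regulation only at a high level. As a rough illustration of that idea (not the authors' implementation), the Python sketch below adjusts a single difficulty parameter, such as the number of blank cells in a generated Sudoku puzzle, based on a moving window of recent solve outcomes; the class name, thresholds, and window size are assumptions introduced for this example, and the CLP-based puzzle generator itself is not shown.

```python
from collections import deque

class DifficultyController:
    """Illustrative controller: raise difficulty when the agent solves most
    recent episodes, lower it when the agent struggles (hypothetical sketch)."""

    def __init__(self, min_level=20, max_level=60, window=100):
        self.level = min_level                # e.g., blank cells in a Sudoku grid
        self.min_level = min_level
        self.max_level = max_level
        self.outcomes = deque(maxlen=window)  # recent solved/failed flags

    def record(self, solved):
        # Store the outcome of the latest episode.
        self.outcomes.append(bool(solved))

    def update(self, raise_at=0.8, lower_at=0.3):
        # Only adapt once a full window of outcomes has been observed.
        if len(self.outcomes) == self.outcomes.maxlen:
            solve_rate = sum(self.outcomes) / len(self.outcomes)
            if solve_rate >= raise_at:
                self.level = min(self.level + 1, self.max_level)
            elif solve_rate <= lower_at:
                self.level = max(self.level - 1, self.min_level)
        return self.level

# Usage: after each episode, report the outcome and request the next level.
# A constraint-based generator would then produce a puzzle at that level,
# e.g., with that many blanks while guaranteeing a unique solution.
controller = DifficultyController()
controller.record(True)
next_level = controller.update()
```

In the paper's setting, the constraint solver is what guarantees that each generated instance at the chosen level remains well-formed and solvable; this sketch only captures the feedback loop between agent performance and the difficulty parameter.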
DOI: 10.1109/TCDS.2024.3485482