Boosting Exploration in Reinforcement Learning Agents via Path-Based Knowledge Graph Reasoning

Bibliographic Details
Published in: Lobachevskii Journal of Mathematics, Vol. 46, No. 5, pp. 2415-2429
Main Authors: Pismerov, A. M., Mouromtsev, D. I.
Format: Journal Article
Language:English
Published: Moscow: Pleiades Publishing; Springer Nature B.V., 01.05.2025
ISSN:1995-0802, 1818-9962
Description
Summary: This paper presents a novel methodology for enhancing the exploration process in reinforcement learning algorithms, based on applying an optimal path search algorithm to a knowledge graph. Traditional exploration strategies used in agent-based models often rely on random and probabilistic action selection, which may prove inefficient in complex and dynamic environments. This work introduces an alternative approach that leverages structured information from a knowledge graph to identify and select the most promising actions. The methodology includes a path-based reasoning module that uses the knowledge graph to determine suitable action directions for the agent. Experimental results indicate that the proposed method improves agent performance in complex, dynamic environments with non-deterministic action sets, demonstrating superior results on tasks with complex knowledge structures and high adaptation requirements.
DOI:10.1134/S1995080224607896
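
The summary describes a path-based reasoning module that replaces purely random exploration with an optimal path search over a knowledge graph. The following is only a minimal sketch of that general idea, not the authors' implementation: it assumes the knowledge graph is stored as a networkx DiGraph whose edges carry an "action" label, and the names promising_entity, path_guided_action, and build_example_graph are hypothetical.

```python
# Sketch of path-guided exploration over a knowledge graph (assumptions noted above).
import random
import networkx as nx

def build_example_graph() -> nx.DiGraph:
    # Toy knowledge graph: nodes are entities, each directed edge is labelled
    # with the action that moves the agent between them (hypothetical layout).
    g = nx.DiGraph()
    g.add_edge("room_a", "room_b", action="go_east")
    g.add_edge("room_b", "key", action="pick_up")
    g.add_edge("key", "door", action="unlock")
    return g

def path_guided_action(graph: nx.DiGraph, current_entity: str,
                       promising_entity: str, action_space: list[str],
                       epsilon: float = 0.1) -> str:
    """Pick an exploratory action from the first hop of the shortest path
    toward a promising entity, falling back to a random action when the
    graph offers no guidance."""
    if random.random() < epsilon:
        return random.choice(action_space)      # keep some residual randomness
    try:
        path = nx.shortest_path(graph, current_entity, promising_entity)
    except (nx.NetworkXNoPath, nx.NodeNotFound):
        return random.choice(action_space)      # no path: fall back to random
    if len(path) < 2:
        return random.choice(action_space)      # already at the target entity
    # The first edge on the optimal path supplies the suggested action direction.
    return graph.edges[path[0], path[1]]["action"]

if __name__ == "__main__":
    kg = build_example_graph()
    print(path_guided_action(kg, "room_a", "door",
                             ["go_east", "pick_up", "unlock", "wait"]))
```

In this sketch the graph search only steers the exploration step; the remainder of the agent's learning loop (value updates, exploitation policy) is unchanged and omitted here.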