Research on robot path planning and obstacle avoidance algorithm in dynamic environment based on deep reinforcement learning

Detailed bibliography
Title: Research on robot path planning and obstacle avoidance algorithm in dynamic environment based on deep reinforcement learning
Authors: Zhaolin Liu
Source: Applied and Computational Engineering. 103:86-93
Publisher information: EWA Publishing, 2024.
Year of publication: 2024
Description: In dynamic environments, robot path planning and obstacle avoidance are critical tasks, especially in applications such as autonomous driving, industrial automation, and mobile robotics. These tasks are inherently challenging due to the unpredictability of the environment and the need for real-time decision-making. This paper seeks to address these challenges by developing and analyzing both traditional and optimized models for robot navigation. The initial model uses a basic Q-learning algorithm, which provides a straightforward approach to learning from the environment but often struggles with the complexity of dynamic scenarios. To address this, an optimized model is developed that combines the Double Deep Q-Learning algorithm (Double DQN) with heuristic strategies. The research employs the MATLAB Reinforcement Learning Toolbox to implement and train these models, and uses a simulated environment with dynamic obstacles as a testbed. The simulation generates the data needed for comprehensive testing and evaluation of the models' performance. The results show that the optimized model substantially outperforms the initial model in path planning efficiency and obstacle avoidance capability, and that combining advanced reinforcement learning techniques with heuristic strategies is essential for enhancing the performance and reliability of robotic systems in complex, dynamic environments, offering valuable insights for future applications in various fields of robotics.
Document type: Article
ISSN: 2755-273X; 2755-2721
DOI: 10.54254/2755-2721/103/20241048
Accession number: edsair.doi...........0d9fcc95fe2ecdd720a74e2d1a2a3e71
Database: OpenAIRE
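
Note: The abstract contrasts a plain Q-learning baseline with a Double DQN model trained via the MATLAB Reinforcement Learning Toolbox. As an illustrative aside only, and not the paper's implementation, the NumPy sketch below shows the core difference between the two update rules: Q-learning bootstraps from the max over a single value estimate, while Double DQN selects the greedy next action with the online network and evaluates it with the target network. All sizes, hyperparameters, and function names here are assumed for demonstration.

# Illustrative sketch (not from the paper): tabular Q-learning update vs. the
# Double DQN target, using NumPy only. Grid size, learning rate, and discount
# factor are assumed values.
import numpy as np

rng = np.random.default_rng(0)

# --- Baseline: tabular Q-learning update for a small grid world ---
n_states, n_actions = 25, 4          # assumed 5x5 grid, 4 moves
alpha, gamma = 0.1, 0.99             # assumed learning rate and discount
Q = np.zeros((n_states, n_actions))

def q_learning_update(s, a, r, s_next):
    """Standard Q-learning: bootstrap from the max over the same table."""
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])

# --- Double DQN target: decouple action selection from evaluation ---
def double_dqn_target(r, q_online_next, q_target_next, done):
    """
    q_online_next:  online-network Q-values for the next state (1D array)
    q_target_next:  target-network Q-values for the next state (1D array)
    The online network picks the greedy action; the target network scores it,
    which reduces the overestimation bias of the plain Q-learning/DQN max.
    """
    a_star = int(np.argmax(q_online_next))
    return r + (0.0 if done else gamma * q_target_next[a_star])

# Toy usage with random Q-value vectors standing in for network outputs.
q_learning_update(s=0, a=1, r=-1.0, s_next=5)
y = double_dqn_target(r=-1.0,
                      q_online_next=rng.normal(size=n_actions),
                      q_target_next=rng.normal(size=n_actions),
                      done=False)
print(Q[0, 1], y)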