Sensor-Based Mobile Robot Navigation via Deep Reinforcement Learning

Detailed bibliography
Published in: International Conference on Big Data and Smart Computing, pp. 147–154
Main authors: Han, Seung-Ho; Choi, Ho-Jin; Benz, Philipp; Loaiciga, Jorge
Format: Conference paper
Language: English
Published: IEEE, 01.01.2018
ISSN: 2375-9356
Description
Summary: Navigation tasks for mobile robots have been widely studied over the past several years. More recently, there have been many attempts to apply machine learning algorithms to them. Deep learning techniques are of special importance because they have achieved excellent performance in various fields, including robot navigation. Deep learning methods, however, require a considerable amount of data to train their models, and their results can be difficult for researchers to interpret. To address this issue, we propose a novel model for mobile robot navigation using deep reinforcement learning. In our navigation tasks, no information about the environment is given to the robot beforehand. Additionally, the positions of the obstacles and the goal change in every episode. To succeed under these conditions, we combine several Q-learning techniques that are considered state-of-the-art. We first describe our model and then verify it through a series of experiments.
DOI: 10.1109/BigComp.2018.00030
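
The summary mentions combining several state-of-the-art Q-learning techniques for sensor-based navigation. As background only, the sketch below shows the basic epsilon-greedy Q-learning loop and temporal-difference update that such deep Q-learning methods build on; the grid size, environment stub, and hyperparameters are illustrative assumptions and not details taken from the paper, whose network architecture and specific Q-learning extensions are not described in this record.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumption: a small discrete state/action space stands in for the
# robot's sensor-derived state; the paper itself uses a deep Q-network instead of a table.
n_states, n_actions = 25, 4
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.99, 0.1   # learning rate, discount factor, exploration rate

def step(state, action):
    # Hypothetical environment stub: in the paper the obstacle and goal positions
    # change every episode and no map is given to the robot beforehand.
    next_state = int(rng.integers(n_states))
    reward = 1.0 if next_state == n_states - 1 else -0.01
    return next_state, reward, next_state == n_states - 1

for episode in range(200):
    state, done = int(rng.integers(n_states)), False
    while not done:
        # Epsilon-greedy action selection.
        if rng.random() < epsilon:
            action = int(rng.integers(n_actions))
        else:
            action = int(np.argmax(Q[state]))
        next_state, reward, done = step(state, action)
        # Q-learning target: r + gamma * max_a' Q(s', a'), with no bootstrap at terminal states.
        target = reward + (0.0 if done else gamma * np.max(Q[next_state]))
        Q[state, action] += alpha * (target - Q[state, action])
        state = next_state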