Payload Transporting With Two Quadrotors by Centralized Reinforcement Learning Method
| Published in: | IEEE Transactions on Aerospace and Electronic Systems, Vol. 60, no. 1, pp. 239-251 |
|---|---|
| Main Authors: | , , , , |
| Format: | Journal Article |
| Language: | English |
| Published: | New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.02.2024 |
| ISSN: | 0018-9251, 1557-9603 |
| Summary: | Quadrotors are increasingly used in automation and artificial intelligence, and among the many quadrotor research topics, payload transport stands out for its implementation challenges. Using multiple quadrotors reduces the load carried by each vehicle but increases system complexity. Motivated by model-free reinforcement learning, we apply it to position control of a nonlinear two-quadrotor payload system. Our approach employs a reinforcement learning agent trained with the twin delayed deep deterministic policy gradient (TD3) algorithm (a generic sketch of the TD3 update step follows this record). Its goal is accurate delivery of the cable-suspended payload and stabilization of the system. We test the method's robustness by injecting noise during training and testing. Simulation results show that TD3 performs well under ideal conditions and remains effective in the presence of noise. The approach can be extended to scenarios involving three or more quadrotors, providing a basis for future work. |
|---|---|
| DOI: | 10.1109/TAES.2023.3321260 |
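
For context on the TD3 algorithm named in the summary, below is a minimal sketch of a single TD3 update step in PyTorch. It is not the authors' implementation: the `Actor`/`Critic` architectures, the hyperparameters (`gamma`, `tau`, `policy_noise`, `noise_clip`, `policy_delay`), and the two-quadrotor payload state and action definitions are illustrative assumptions, not values taken from the article.

```python
# A minimal, generic TD3 update step in PyTorch. Illustrative sketch only:
# network sizes, hyperparameters, and the quadrotor-payload state/action
# definitions are assumptions, not details from the article.
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Deterministic policy: maps a state to a bounded action."""
    def __init__(self, state_dim, action_dim, max_action=1.0):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, action_dim), nn.Tanh(),
        )
        self.max_action = max_action

    def forward(self, state):
        return self.max_action * self.net(state)

class Critic(nn.Module):
    """Q-function: maps a (state, action) pair to a scalar value."""
    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=1))

def td3_update(actor, critic1, critic2, actor_t, critic1_t, critic2_t,
               actor_opt, critic_opt, batch, step, gamma=0.99, tau=0.005,
               policy_noise=0.2, noise_clip=0.5, policy_delay=2, max_action=1.0):
    """One gradient step with TD3's three ingredients: target policy
    smoothing, clipped double-Q targets, and delayed actor/target updates."""
    state, action, reward, next_state, done = batch

    with torch.no_grad():
        # Target policy smoothing: perturb the target action with clipped noise.
        noise = (torch.randn_like(action) * policy_noise).clamp(-noise_clip, noise_clip)
        next_action = (actor_t(next_state) + noise).clamp(-max_action, max_action)
        # Clipped double-Q: bootstrap from the smaller of the two target critics.
        target_q = torch.min(critic1_t(next_state, next_action),
                             critic2_t(next_state, next_action))
        target_q = reward + gamma * (1.0 - done) * target_q

    # Regress both critics toward the shared target.
    critic_loss = (nn.functional.mse_loss(critic1(state, action), target_q)
                   + nn.functional.mse_loss(critic2(state, action), target_q))
    critic_opt.zero_grad()
    critic_loss.backward()
    critic_opt.step()

    # Delayed updates: refresh the actor and the target networks less often.
    if step % policy_delay == 0:
        actor_loss = -critic1(state, actor(state)).mean()
        actor_opt.zero_grad()
        actor_loss.backward()
        actor_opt.step()
        for net, net_t in ((actor, actor_t), (critic1, critic1_t), (critic2, critic2_t)):
            for p, p_t in zip(net.parameters(), net_t.parameters()):
                p_t.data.mul_(1.0 - tau).add_(tau * p.data)
```

In practice the target networks would be initialized as deep copies of the online networks and the batch drawn from a replay buffer; the reward for the payload-transport task would encode payload position error and stabilization terms, which the article defines but this sketch does not reproduce.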