Wildfire Front Monitoring With Multiple UAVs Using Deep Q-Learning
Saved in:
| Title: | Wildfire Front Monitoring With Multiple UAVs Using Deep Q-Learning |
|---|---|
| Authors: | Alberto Viseras, Michael Meissner, Juan Marchal |
| Source: | IEEE Access, Vol 9 (2021) |
| Publisher Information: | Institute of Electrical and Electronics Engineers (IEEE), 2021. |
| Publication Year: | 2021 |
| Subjects: | Multi-robot systems, Intelligent robots, Unmanned aerial vehicles, Mobile robots, Robot learning, Robot control, Electrical engineering. Electronics. Nuclear engineering (TK1-9971), 13. Climate action, 02 engineering and technology, 0202 electrical engineering, electronic engineering, information engineering, 0209 industrial biotechnology |
| Description: | Wildfires destroy thousands of hectares every summer all over the globe. To provide an effective response and to mitigate the impact of wildfires, firefighters require real-time monitoring of the fire front. This article proposes a cooperative reinforcement learning (RL) framework that allows a team of autonomous unmanned aerial vehicles (UAVs) to learn how to monitor a fire front. In the literature, independent Q-learners were proposed to solve a wildfire monitoring task with two UAVs. Here, we propose a framework that can be easily extended to a larger number of UAVs. Our framework builds on two methods: multiple single trained Q-learning agents (MSTA) and value decomposition networks (VDN). MSTA trains a single UAV controller, which is then “copied” to each of the UAVs in the team. In contrast, VDN trains agents to learn how to cooperate. We benchmarked our two methods, MSTA and VDN, in simulation against two state-of-the-art approaches: independent Q-learners and a joint Q-learner. Simulation results show that our methods outperform the state-of-the-art approaches in a wildfire front monitoring task with up to 9 fixed-wing and multi-copter UAVs. |
| Document Type: | Article; Conference object |
| ISSN: | 2169-3536 |
| DOI: | 10.1109/access.2021.3055651 |
| Access URL: | https://ieeexplore.ieee.org/ielx7/6287639/6514899/09340340.pdf https://doaj.org/article/da398fa0b1ba4758b3db178df5304204 https://ieeexplore.ieee.org/abstract/document/9340340 https://elib.dlr.de/140953/ |
| Rights: | CC BY; CC BY-NC-ND |
| Accession Number: | edsair.doi.dedup.....76ed5b4fe8f085e6c5ae6d663028877f |
| Database: | OpenAIRE |
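The record above describes the two training schemes only at a high level. As an illustration of the VDN idea referenced in the abstract (a joint action value modelled as the sum of per-agent Q-values, trained end-to-end on a shared team reward), the following is a minimal PyTorch sketch; the network sizes, observation layout, and batch format are assumptions made for illustration and are not taken from the paper.

```python
# Minimal sketch of value decomposition networks (VDN) for cooperative
# multi-agent Q-learning with a discrete action space per UAV.
# Joint value: Q_tot(s, a_1..a_N) = sum_i Q_i(o_i, a_i), trained with a
# single temporal-difference loss on the shared team reward.
# All dimensions and the batch layout are illustrative assumptions.

import torch
import torch.nn as nn


class AgentQNet(nn.Module):
    """Per-UAV Q-network mapping a local observation to action values."""

    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)


def vdn_td_loss(agents, target_agents, batch, gamma=0.99):
    """One VDN loss evaluation on a batch of team transitions.

    batch: dict of tensors (assumed layout)
      obs, next_obs: [B, N, obs_dim]  local observations per agent
      actions:       [B, N]           chosen discrete actions (long)
      reward:        [B]              shared team reward
      done:          [B]              episode-termination flag
    """
    obs, next_obs = batch["obs"], batch["next_obs"]
    actions, reward, done = batch["actions"], batch["reward"], batch["done"]

    # Q_i(o_i, a_i) for the actions actually taken, summed over agents.
    q_taken = torch.stack(
        [agents[i](obs[:, i]).gather(1, actions[:, i:i + 1]).squeeze(1)
         for i in range(len(agents))], dim=1).sum(dim=1)

    # Greedy per-agent bootstrap values from the target networks.
    with torch.no_grad():
        q_next = torch.stack(
            [target_agents[i](next_obs[:, i]).max(dim=1).values
             for i in range(len(agents))], dim=1).sum(dim=1)
        target = reward + gamma * (1.0 - done.float()) * q_next

    return nn.functional.mse_loss(q_taken, target)
```

Under the same interface, the MSTA variant mentioned in the abstract would instead train a single AgentQNet with standard deep Q-learning and deploy an identical copy of it on every UAV at execution time.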