HQA: Hybrid Q-learning and AODV multi-path routing algorithm for Flying Ad-hoc Networks

Detailed bibliography
Published in: Vehicular Communications, Vol. 55, Art. no. 100947
Main authors: Sun, Chen; Hou, Liang; Yu, Suqi; Shu, Jian
Format: Journal Article
Language: English
Publication details: Elsevier Inc., 01.10.2025
ISSN:2214-2096
Abstract Reliable and efficient data transmission between Unmanned Aerial Vehicle (UAV) nodes is critical for the control of UAV swarms and relies heavily on effective routing protocols in Flying Ad-hoc Networks (FANETs). However, Q-learning-based FANET routing protocols, which are gaining widespread attention, face two significant challenges: 1) the insufficient stability of Q-learning leads to unreliable route selection in certain scenarios and higher packet loss rates; and 2) in void regions with frequent topology changes and vast path-exploration spaces, the slow convergence of Q-learning cannot keep pace with dynamic environmental changes, reducing the packet delivery rate (PDR). This paper proposes a hybrid Q-learning/AODV (HQA) multi-path routing algorithm that integrates the Q-learning and AODV protocols to address these challenges. HQA includes a Bayesian stability evaluator for adaptive Q-learning/AODV switching and a dual-update reward mechanism that integrates reliable AODV paths into Q-learning training, enabling rapid void recovery and latency-optimized routing. Experimental results demonstrate HQA's superiority over baseline protocols: compared to AODV, HQA reduces average end-to-end delay by 13.6–23.9% and improves PDR by 5.4–9.1% in the non-void and void states, respectively. It outperforms QMR by 2.2–6.3% in PDR and achieves 25.6% and 53.2% higher average PDR than QMR and AODV, respectively, across network densities. The hybrid design also accelerates convergence by 40% over standalone Q-learning through AODV-assisted rewards while maintaining scalability under dynamic topology changes. These findings indicate that HQA adapts more rapidly to the fast-changing topology of FANETs and handles void regions more effectively, offering a promising solution for enhancing the performance and reliability of FANETs.
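The abstract names two mechanisms, a Bayesian stability evaluator that switches between Q-learning and AODV, and a dual-update reward that folds reliable AODV-discovered paths into Q-learning training, but the record does not include their equations. The Python sketch below is one hedged illustration of how such a hybrid could be structured; the class name HQANodeSketch, the Beta-Bernoulli stability model, and every threshold and reward weight are assumptions made for this example, not the authors' implementation.

```python
# Illustrative sketch only (not from the paper): one plausible reading of the
# mechanisms described in the abstract. All names, thresholds, and reward
# weights are invented for this example.
from collections import defaultdict

ALPHA, GAMMA = 0.3, 0.9        # assumed learning rate and discount factor
SWITCH_THRESHOLD = 0.6         # assumed stability cutoff for falling back to AODV

class HQANodeSketch:
    def __init__(self):
        self.q = defaultdict(float)               # Q[(destination, next_hop)]
        self.beta = defaultdict(lambda: [1, 1])   # Beta(successes+1, failures+1) per hop

    def stability(self, hop):
        """Bayesian stability estimate: posterior mean of per-hop delivery success."""
        s, f = self.beta[hop]
        return s / (s + f)

    def choose_mode(self, dest, neighbors):
        """Route with Q-learning while the best-valued neighbor looks stable;
        otherwise fall back to AODV route discovery."""
        best = max(neighbors, key=lambda h: self.q[(dest, h)])
        if self.stability(best) >= SWITCH_THRESHOLD:
            return "q-learning", best
        return "aodv", None

    def dual_update(self, dest, hop, delivered, delay, on_aodv_path, next_neighbors):
        """Q-update whose reward is boosted when the chosen hop lies on a reliable
        AODV-discovered path (one reading of the 'dual-update reward')."""
        self.beta[hop][0 if delivered else 1] += 1           # update stability posterior
        reward = (1.0 if delivered else -1.0) - 0.1 * delay  # favor delivery and low delay
        if on_aodv_path:
            reward += 0.5                                    # assumed AODV-assisted bonus
        best_next = max((self.q[(dest, h)] for h in next_neighbors), default=0.0)
        self.q[(dest, hop)] += ALPHA * (reward + GAMMA * best_next - self.q[(dest, hop)])
```

A Beta-Bernoulli posterior is simply a minimal choice for tracking per-neighbor delivery success under uncertainty; the paper's Bayesian stability evaluator and reward weighting may differ.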
ArticleNumber 100947
Author Sun, Chen
Hou, Liang
Yu, Suqi
Shu, Jian
Author_xml – sequence: 1
  givenname: Chen
  orcidid: 0000-0002-6783-9631
  surname: Sun
  fullname: Sun, Chen
  email: sunchen@nchu.edu.cn
– sequence: 2
  givenname: Liang
  surname: Hou
  fullname: Hou, Liang
– sequence: 3
  givenname: Suqi
  surname: Yu
  fullname: Yu, Suqi
– sequence: 4
  givenname: Jian
  surname: Shu
  fullname: Shu, Jian
ContentType Journal Article
Copyright 2025 Elsevier Inc.
Copyright_xml – notice: 2025 Elsevier Inc.
DOI 10.1016/j.vehcom.2025.100947
DatabaseName CrossRef
DatabaseTitle CrossRef
Discipline Engineering
ExternalDocumentID 10_1016_j_vehcom_2025_100947
S2214209625000749
ISICitedReferencesCount 0
ISSN 2214-2096
IsPeerReviewed false
IsScholarly true
Keywords Flying Ad-hoc Networks (FANETs)
Q-learning
Routing protocol
Ad-hoc on-demand distance vector (AODV)
Language English
ORCID 0000-0002-6783-9631
PublicationCentury 2000
PublicationDate October 2025
PublicationDateYYYYMMDD 2025-10-01
PublicationDate_xml – month: 10
  year: 2025
  text: October 2025
PublicationDecade 2020
PublicationTitle Vehicular Communications
PublicationYear 2025
Publisher Elsevier Inc
Publisher_xml – name: Elsevier Inc
StartPage 100947
SubjectTerms Ad-hoc on-demand distance vector (AODV)
Flying Ad-hoc Networks (FANETs)
Q-learning
Routing protocol
Title HQA: Hybrid Q-learning and AODV multi-path routing algorithm for Flying Ad-hoc Networks
URI https://dx.doi.org/10.1016/j.vehcom.2025.100947
Volume 55