Rescale-Invariant Federated Reinforcement Learning for Resource Allocation in V2X Networks

Detailed bibliography
Published in: IEEE Communications Letters, Volume 28, Issue 12, pp. 2799-2803
Main authors: Xu, Kaidi; Zhou, Shenglong; Li, Geoffrey Ye
Format: Journal Article
Language: English
Published: New York: IEEE, 01.12.2024
Publisher: The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
ISSN: 1089-7798, 1558-2558
Description
Summary: Federated Reinforcement Learning (FRL) offers a promising solution to various practical challenges in resource allocation for vehicle-to-everything (V2X) networks. However, the data discrepancy among individual agents can significantly degrade the performance of FRL-based algorithms. To address this limitation, we exploit the node-wise invariance property of rectified linear unit (ReLU)-activated neural networks, with the aim of reducing data discrepancy to improve learning performance. Based on this property, we introduce a backward rescale-invariant operation to develop a rescale-invariant FRL algorithm. Simulation results demonstrate that the proposed algorithm notably enhances both convergence speed and convergent performance.
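The node-wise invariance the summary refers to is a general property of ReLU networks, not specific to this paper: because ReLU is positively homogeneous (relu(c·z) = c·relu(z) for c > 0), scaling a hidden unit's incoming weights by c and its outgoing weights by 1/c leaves the network function unchanged. The sketch below illustrates this property on a toy two-layer network with NumPy; it is not the paper's backward rescale-invariant operation, and the weight shapes and scale factor are arbitrary choices for the demonstration.

```python
import numpy as np

def relu(z):
    # ReLU is positively homogeneous: relu(c*z) = c*relu(z) for c > 0.
    return np.maximum(z, 0.0)

def forward(W1, W2, x):
    # Two-layer ReLU network without biases: y = W2 relu(W1 x).
    return W2 @ relu(W1 @ x)

rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 3))   # input -> hidden weights (4 hidden units)
W2 = rng.standard_normal((2, 4))   # hidden -> output weights
x = rng.standard_normal(3)

# Rescale hidden unit 0: multiply its incoming weights by c and
# divide its outgoing weights by c. The factors cancel through ReLU.
c = 2.5
W1s, W2s = W1.copy(), W2.copy()
W1s[0, :] *= c
W2s[:, 0] /= c

# The two (different) weight settings realize the same function.
assert np.allclose(forward(W1, W2, x), forward(W1s, W2s, x))
```

If the hidden layer had biases, the rescaled unit's bias would also have to be multiplied by c for the invariance to hold exactly.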
DOI: 10.1109/LCOMM.2024.3486166