Shared-Resource Generative Adversarial Network (GAN) Training for 5G URLLC Deep Reinforcement Learning Augmentation

Detailed bibliography
Published in: IEEE International Conference on Communications (2024), pp. 2998-3003
Main authors: Mehdipourchari, Kaveh; Askarizadeh, Mohammad; Nguyen, Kim Khoa
Format: Conference paper
Language: English
Published: IEEE, 09.06.2024
ISSN: 1938-1883
Description
Summary: Deep Reinforcement Learning (DRL) solutions to 5G problems often face communication unreliability issues due to imbalanced state-space distributions and the scarcity of rare samples. Generative Adversarial Networks (GANs) are a promising way to improve DRL reliability. However, employing GANs in resource-constrained edge environments is challenging due to their heavy resource consumption. Previous general-purpose resource allocation models for training neural networks do not consider GAN quality requirements, such as the minimum number of training samples. We propose an architecture for sharing edge and cloud resources among multiple GANs, then formulate an optimization model, named OGAN, that maximizes DRL reliability subject to resource constraints for training GANs and fine-tuning DRLs. OGAN allocates resources for training several GANs and DRLs concurrently based on an upper-bound error. Difference-of-convex (DC) programming is then used to solve this mixed-integer non-linear model. Our experimental results show that OGAN improves overall system reliability and performance by 23% and 22%, respectively, compared to baselines.
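The abstract states that the OGAN model is solved with difference-of-convex (DC) programming. As a rough illustration of how DC programming proceeds, the sketch below applies the convex-concave procedure (the standard iterative scheme for DC programs) to a hypothetical one-dimensional toy objective. It is not the paper's OGAN formulation, which is a mixed-integer non-linear program over edge and cloud resources; the objective and function names here are assumptions made purely for illustration.

    # Minimal sketch of the convex-concave procedure (CCP) for a DC program.
    # Toy objective (hypothetical, NOT the paper's model):
    #   minimize h(x) = f(x) - g(x), with f(x) = x**4 and g(x) = 2*x**2,
    # where both f and g are convex, so h is a difference of convex functions.

    def ccp(x0: float, tol: float = 1e-9, max_iter: int = 100) -> float:
        """Minimize x**4 - 2*x**2 by linearizing the concave part each step."""
        x = x0
        for _ in range(max_iter):
            # Replace -g with its linearization at x: the surrogate objective is
            # f(y) - g(x) - g'(x)*(y - x) = y**4 - 4*x*y + const, a convex
            # subproblem. Its minimizer solves 4*y**3 - 4*x = 0, i.e. the real
            # cube root of x, so each iteration has a closed-form update here.
            x_next = abs(x) ** (1.0 / 3.0) * (1 if x >= 0 else -1)
            if abs(x_next - x) < tol:
                return x_next
            x = x_next
        return x

    print(ccp(0.5))   # converges to the stationary point x = 1, where h(1) = -1
    print(ccp(-2.0))  # converges to x = -1 from the other basin

Each CCP iteration solves a convex surrogate and is guaranteed not to increase the DC objective, which is why the scheme suits non-convex models like the one described; in the paper's mixed-integer setting the convex subproblems would require an actual solver rather than a closed-form update.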
DOI: 10.1109/ICC51166.2024.10622305