Shared-Resource Generative Adversarial Network (GAN) Training for 5G URLLC Deep Reinforcement Learning Augmentation

Detailed bibliography
Published in: IEEE International Conference on Communications (2024), pp. 2998-3003
Main authors: Mehdipourchari, Kaveh; Askarizadeh, Mohammad; Nguyen, Kim Khoa
Format: Conference paper
Language: English
Publication details: IEEE, 09.06.2024
ISSN: 1938-1883
Description
Summary: Deep Reinforcement Learning (DRL) solutions to 5G problems often face communication unreliability issues due to imbalanced state-space distributions and the scarcity of rare samples. Generative Adversarial Networks (GANs) are a promising way to improve DRL reliability. However, employing GANs in resource-constrained edge environments is very challenging due to their heavy resource consumption. Previous general resource allocation models for training neural networks do not consider GAN quality requirements such as the minimum number of training samples. We propose an architecture for sharing edge and cloud resources among multiple GANs, then formulate an optimization model, named OGAN, to maximize DRL reliability with respect to resource constraints for training GANs and fine-tuning DRLs. OGAN allocates resources for training several GANs and DRLs concurrently based on an upper-bound error. Difference-of-convex programming is then used to solve this mixed-integer non-linear model. Our experimental results show that OGAN improves the overall system reliability and performance by 23% and 22%, respectively, compared to baselines.
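
The abstract names difference-of-convex (DC) programming as the solver for OGAN's mixed-integer non-linear model. As a rough illustration of that technique only, the sketch below shows a plain DCA (difference-of-convex algorithm) loop on a toy objective; the objective, function names, and loop structure are all assumptions for illustration, not the paper's actual OGAN formulation.

```python
# Minimal DCA sketch: minimize f(x) = g(x) - h(x) with g, h convex.
# The quartic/quadratic toy objective below is an assumption for
# illustration; OGAN's real model also has integer variables and
# resource constraints, which this sketch omits.
import numpy as np
from scipy.optimize import minimize

def g(x):                  # convex part (quartic penalty)
    return np.sum(x**4)

def grad_h(x):             # gradient of the convex part h(x) = 2*||x||^2
    return 4.0 * x

def dca(x0, iters=50, tol=1e-8):
    """At each step, linearize h at x_k and solve the convex surrogate
    min_x g(x) - <grad_h(x_k), x>, the standard DCA iteration."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        gk = grad_h(x)
        # Convex subproblem: g plus a linear term stays convex.
        res = minimize(lambda z: g(z) - gk @ z, x)
        if np.linalg.norm(res.x - x) < tol:
            return res.x
        x = res.x
    return x

x_star = dca(np.array([0.3, -1.7]))
print(x_star)  # converges to a stationary point of g - h (here +/-1 per coordinate)
```

The design point DCA exploits is that subtracting the linearization of the concave term makes every subproblem convex, so each iteration is cheap and the objective decreases monotonically toward a stationary point.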
DOI: 10.1109/ICC51166.2024.10622305