Shared-Resource Generative Adversarial Network (GAN) Training for 5G URLLC Deep Reinforcement Learning Augmentation

Bibliographic Details
Published in: IEEE International Conference on Communications (2024), pp. 2998-3003
Main Authors: Mehdipourchari, Kaveh, Askarizadeh, Mohammad, Nguyen, Kim Khoa
Format: Conference Proceeding
Language: English
Published: IEEE, 09.06.2024
ISSN: 1938-1883
Description
Summary: Deep Reinforcement Learning (DRL) solutions to 5G problems often face communication unreliability due to imbalanced state-space distributions and the scarcity of rare samples. Generative Adversarial Networks (GANs) are a promising means of improving DRL reliability. However, employing GANs in resource-constrained edge environments is challenging because of their heavy resource consumption. Previous general resource allocation models for training neural networks do not consider GAN quality requirements such as the minimum number of training samples. We propose an architecture for sharing edge and cloud resources among multiple GANs, then formulate an optimization model, named OGAN, that maximizes DRL reliability subject to resource constraints for training GANs and fine-tuning DRLs. OGAN allocates resources for training several GANs and DRLs concurrently based on an upper bound on the error. Difference-of-convex programming is then used to solve this mixed-integer non-linear model. Our experimental results show that OGAN improves overall system reliability and performance by 23% and 22%, respectively, compared to baselines.
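The solution step the abstract names, difference-of-convex (DC) programming, admits a compact illustration. Below is a minimal, hypothetical Python sketch of the standard DC algorithm (DCA) pattern on a toy box-constrained problem; the functions g and h, the box limits, and the four "GAN jobs" are all invented for illustration and are not the paper's actual OGAN formulation.

    import numpy as np

    # Hypothetical illustration of the difference-of-convex algorithm (DCA)
    # pattern: minimize f(x) = g(x) - h(x) over a box, with g and h both
    # convex. g stands in for a resource-cost term and h for a
    # reliability-gain term; neither matches the paper's real objective.

    def g(x):
        # convex quadratic "resource cost" (invented)
        return 0.5 * np.dot(x, x)

    def h(x):
        # convex "reliability gain" (invented): 0.5 * (sum of shares)^2
        return 0.5 * np.sum(x) ** 2

    def grad_h(x):
        return np.sum(x) * np.ones_like(x)

    def dca_step(x, lo=0.0, hi=1.0):
        # Linearize h at the current point and minimize the convex surrogate
        # g(y) - grad_h(x)^T y over the box [lo, hi]^n. With g(y) = 0.5||y||^2
        # the surrogate minimizer is grad_h(x) projected onto the box.
        return np.clip(grad_h(x), lo, hi)

    x = np.full(4, 0.5)  # initial resource shares for 4 hypothetical GAN jobs
    for _ in range(100):
        x_next = dca_step(x)
        if np.linalg.norm(x_next - x) < 1e-9:
            break
        x = x_next

    print("stationary point:", x, "objective:", g(x) - h(x))

Each iteration replaces the subtracted convex term h with its linearization, leaving a convex subproblem; in this toy case that subproblem has a closed-form, box-projected solution. The paper's actual model additionally involves integer variables and GAN/DRL-specific constraints that this sketch omits.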
DOI: 10.1109/ICC51166.2024.10622305