Energy-Efficient Transmission Strategy for Delay Tolerable Services in NOMA-Based Downlink With Two Users

Detailed Description

Bibliographic Details
Published in: IEEE Access, Vol. 11, pp. 113227-113243
Main Authors: Bai, Mengmeng; Zhu, Rui; Guo, Jianxin; Wang, Feng; Wang, Liping; Zhu, Hangjie; Huang, Lei; Zhang, Yushuai
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2023
ISSN: 2169-3536
Online Access: Full text
Description
Abstract: With the continuous development of the communication industry, real-time services in 4G networks are increasingly giving way to Delay Tolerable (DT) services in 5G/B5G networks. In addition, controlling energy consumption poses a significant challenge for the communication industry. We therefore study algorithms and schemes to improve the Energy Efficiency (EE) of DT services in a Non-Orthogonal Multiple Access (NOMA) downlink two-user communication system. First, we transform the EE enhancement problem, by derivation, into a convex optimization problem over the transmission power. Second, we propose the Approximate Statistical Dynamic Programming (ASDP) algorithm, Deep Deterministic Policy Gradient (DDPG), and Proximal Policy Optimization (PPO) to overcome the inability of convex optimization to make decisions in real time. Finally, we perform an interpretability analysis of whether the decision schemes of the agents trained by the DDPG and PPO algorithms are reasonable. Simulation results show that the agent trained by the DDPG algorithm makes better decisions than both the ASDP algorithm and the PPO algorithm.
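The abstract does not reproduce the paper's system model, but for orientation, below is a minimal sketch of the kind of EE objective typically optimized over transmission power in a two-user NOMA downlink, assuming the standard successive interference cancellation (SIC) decoding order. All parameter names and numeric values (channel gains, bandwidth, noise density, circuit power) are illustrative placeholders, not taken from the paper.

```python
import numpy as np

def noma_two_user_ee(p_near, p_far, g_near, g_far, bandwidth, n0, p_circuit):
    """Energy efficiency (bits/joule) of a two-user NOMA downlink.

    Assumes the standard SIC order: the far (weak) user decodes its own
    signal treating the near user's signal as interference; the near
    (strong) user cancels the far user's signal before decoding its own.
    Parameter names are illustrative, not from the paper.
    """
    noise = n0 * bandwidth
    # Far user: the near user's power appears as intra-cell interference.
    r_far = bandwidth * np.log2(1.0 + p_far * g_far / (p_near * g_far + noise))
    # Near user: the far user's signal is removed by SIC; only noise remains.
    r_near = bandwidth * np.log2(1.0 + p_near * g_near / noise)
    # EE = sum rate / total consumed power (transmit plus circuit power).
    return (r_near + r_far) / (p_near + p_far + p_circuit)

if __name__ == "__main__":
    # Illustrative sweep of the near user's power under a fixed total
    # budget (keeping p_near < p_far so the SIC order stays valid),
    # showing the EE/power trade-off shape such schemes optimize.
    p_total, g_near, g_far = 1.0, 1e-6, 1e-7   # placeholder values
    bw, n0, p_c = 1e6, 1e-14, 0.1              # placeholder values
    for p_near in np.linspace(0.05, 0.45, 5):
        ee = noma_two_user_ee(p_near, p_total - p_near, g_near, g_far, bw, n0, p_c)
        print(f"p_near={p_near:.2f} W -> EE={ee / 1e6:.2f} Mbit/J")
```

Under assumptions like these, EE is a ratio of a concave sum rate to an affine power term, which is what makes the reformulation as a tractable (convex/fractional) optimization over transmission power plausible; the paper's exact derivation should be consulted for the actual model and constraints.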
DOI: 10.1109/ACCESS.2023.3323930