Joint Adaptive Modulation Coding and Power Optimization in Heterogeneous Networks based on Constrained Deep Reinforcement Learning

Bibliographic Details
Published in: IEEE Transactions on Wireless Communications, p. 1
Main Authors: Wang, Tao, Gu, Xin, Chen, Haipeng, Chang, Cheng, Tang, Tiantian, Huang, Hao, Jiao, Donglai, Lin, Yun, Gui, Guan
Format: Journal Article
Language: English
Published: IEEE 2025
ISSN: 1536-1276, 1558-2248
Description
Summary: In cognitive heterogeneous networks, multiple secondary transmitters (STs) coexist with primary users (PUs) on the same frequency channel through spectrum sensing. Because sensing of channel occupancy is imperfect, STs can interfere with PUs and degrade their transmission performance. This paper proposes a constrained deep reinforcement learning-based joint adaptive modulation coding and power selection (CDRL-JAMCPS) algorithm. CDRL-JAMCPS learns the interference patterns of STs on PUs through interaction with the environment and, based on the learned patterns, selects the modulation coding scheme and transmit power for the PUs' future frames, aiming to maximize the transmission rate while reducing energy consumption. Furthermore, to address the issue that existing optimization algorithms consider only the network transmission rate while neglecting data transmission quality, this paper proposes a reward function in Lagrangian form based on a frame error rate (FER) constraint; optimizing this reward function in its dual domain resolves the problem of poor data transmission quality. Simulation results demonstrate that the proposed algorithm achieves better transmission performance than other reinforcement learning algorithms in environments where signal interference is difficult to perceive. Moreover, compared with algorithms that do not consider transmission quality, the proposed algorithm shows significant advantages in meeting FER requirements and improving data transmission quality.
DOI: 10.1109/TWC.2025.3609334
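
The record contains no implementation details, but the Lagrangian reward and dual-domain optimization mentioned in the abstract follow the standard constrained reinforcement learning recipe. The Python sketch below is a minimal illustration of that general pattern under stated assumptions, not the authors' CDRL-JAMCPS code; the names fer_target and dual_step and the per-frame rate reward are assumptions made for the example.

    class LagrangianFERReward:
        """Illustrative sketch only: a rate reward relaxed by an FER constraint.

        This is the generic Lagrangian / dual-ascent pattern for constrained RL,
        not the paper's implementation. fer_target and dual_step are
        hypothetical parameters.
        """

        def __init__(self, fer_target=1e-2, dual_step=1e-3):
            self.fer_target = fer_target  # maximum tolerated frame error rate
            self.dual_step = dual_step    # step size for the dual update
            self.lam = 0.0                # Lagrange multiplier, kept >= 0

        def reward(self, rate, fer):
            # Lagrangian form: transmission rate minus the weighted violation
            # of the FER constraint.
            return rate - self.lam * (fer - self.fer_target)

        def dual_update(self, observed_fer):
            # Projected gradient ascent in the dual domain: raise the multiplier
            # while the observed FER exceeds the target, lower it (down to zero)
            # otherwise.
            self.lam = max(0.0, self.lam + self.dual_step * (observed_fer - self.fer_target))

A policy that maximizes reward() while dual_update() is applied between training episodes is pushed toward high rate only insofar as the FER target is met, which is the trade-off the abstract attributes to optimizing the Lagrangian reward in its dual domain.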