Multi-Source AoI-Constrained Resource Minimization under HARQ: Heterogeneous Sampling Processes

Bibliographic Details
Published in: IEEE Transactions on Vehicular Technology, Vol. 73, No. 1, pp. 1-15
Main Authors: Vilni, Saeid Sadeghi; Moltafet, Mohammad; Leinonen, Markus; Codreanu, Marian
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.01.2024
ISSN: 0018-9545, 1939-9359
Description
Summary: We consider a multi-source, hybrid automatic repeat request (HARQ)-based system, where a transmitter sends status update packets of random arrival (i.e., uncontrollable sampling) and generate-at-will (i.e., controllable sampling) sources to a destination through an error-prone channel. We develop transmission scheduling policies to minimize the average number of transmissions subject to an average age of information (AoI) constraint. First, we consider a known environment (i.e., known system statistics) and develop a near-optimal deterministic transmission policy and a low-complexity dynamic transmission (LC-DT) policy. The former policy is derived by casting the main problem into a constrained Markov decision process (CMDP) problem, which is then solved using Lagrangian relaxation, the relative value iteration algorithm, and bisection. The LC-DT policy is developed via the drift-plus-penalty (DPP) method by transforming the main problem into a sequence of per-slot problems. Finally, we consider an unknown environment and devise a learning-based transmission policy by relaxing the CMDP problem into an MDP problem using the DPP method and then adopting the deep Q-learning algorithm. Numerical results show that the proposed policies achieve near-optimal performance and illustrate the benefits of HARQ in status updating.
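To illustrate the drift-plus-penalty idea mentioned in the summary, the following is a minimal sketch, not the paper's exact LC-DT policy: it assumes a single generate-at-will source, replaces the HARQ retransmission state with an i.i.d. Bernoulli success channel, and uses hypothetical names such as dpp_policy, V, and aoi_limit. A virtual queue tracks accumulated violation of the average-AoI constraint, and each slot the policy compares the DPP cost of transmitting against idling.

```python
import random

def dpp_policy(num_slots=10_000, V=50.0, aoi_limit=5.0, p_success=0.6, seed=0):
    """Toy drift-plus-penalty rule: minimize transmissions s.t. average AoI <= aoi_limit.

    Simplifying assumptions (not the paper's model): one generate-at-will source,
    i.i.d. Bernoulli(p_success) channel instead of HARQ with retransmission memory.
    """
    rng = random.Random(seed)
    aoi = 1.0        # age of information at the destination
    Q = 0.0          # virtual queue for the average-AoI constraint
    tx_count = 0
    aoi_sum = 0.0

    for _ in range(num_slots):
        # Per-slot problem: weigh the transmission penalty (scaled by V) against
        # the queue-weighted expected AoI under each action.
        expected_aoi_tx = p_success * 1.0 + (1 - p_success) * (aoi + 1)
        expected_aoi_idle = aoi + 1
        cost_tx = V * 1.0 + Q * expected_aoi_tx
        cost_idle = Q * expected_aoi_idle
        transmit = cost_tx < cost_idle

        if transmit:
            tx_count += 1
            aoi = 1.0 if rng.random() < p_success else aoi + 1
        else:
            aoi += 1

        aoi_sum += aoi
        # Virtual-queue update: grows whenever instantaneous AoI exceeds the limit.
        Q = max(Q + aoi - aoi_limit, 0.0)

    return tx_count / num_slots, aoi_sum / num_slots

if __name__ == "__main__":
    tx_rate, avg_aoi = dpp_policy()
    print(f"transmission rate ~ {tx_rate:.3f}, average AoI ~ {avg_aoi:.3f}")
```

A larger V pushes the rule toward fewer transmissions at the cost of slower virtual-queue convergence; the paper's LC-DT policy additionally handles multiple heterogeneous sources and the HARQ retransmission state, which this sketch omits.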
DOI: 10.1109/TVT.2023.3310190