Joint Computation Offloading and Multiuser Scheduling Using Approximate Dynamic Programming in NB-IoT Edge Computing System

Bibliographic Details
Published in: IEEE Internet of Things Journal, Volume 6, Issue 3, pp. 5345-5362
Main Authors: Lei, Lei; Xu, Huijuan; Xiong, Xiong; Zheng, Kan; Xiang, Wei
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.06.2019
ISSN: 2327-4662
Description
Summary: The Internet of Things (IoT) connects a huge number of resource-constrained IoT devices to the Internet, and these devices generate massive amounts of data that can be offloaded to the cloud for computation. Because some applications require very low latency, the emerging mobile edge computing (MEC) architecture offers cloud services by deploying MEC servers at the mobile base stations (BSs). IoT devices can transmit the offloaded data to the BS for computation at the MEC server. Narrowband-IoT (NB-IoT) is a new cellular technology for the transmission of IoT data to the BS. In this paper, we propose a joint computation offloading and multiuser scheduling algorithm in an NB-IoT edge computing system that minimizes the long-term average weighted sum of delay and power consumption under stochastic traffic arrivals. We formulate the dynamic optimization problem as an infinite-horizon average-reward continuous-time Markov decision process (CTMDP) model. To deal with the curse-of-dimensionality problem, we use approximate dynamic programming techniques, namely linear value-function approximation and temporal-difference learning with post-decision states and a semi-gradient descent method, to derive a simple algorithm for solving the CTMDP model. The proposed algorithm is semi-distributed: the offloading algorithm is performed locally at the IoT devices, while the scheduling algorithm is auction-based, with the IoT devices submitting bids to the BS, which makes the scheduling decision centrally. Simulation results show that the proposed algorithm provides significant performance improvement over two baseline algorithms and the MUMTO algorithm, which is designed based on a deterministic task model.
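To make the learning step named in the abstract concrete, the sketch below illustrates a semi-gradient TD(0) update with linear value-function approximation over post-decision states for an average-cost objective. The feature map, step sizes, and stage-cost definition are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def features(post_decision_state):
    """Hypothetical feature vector, e.g. per-device queue lengths and
    channel indicators stacked into one array (assumption for illustration)."""
    return np.asarray(post_decision_state, dtype=float)

def td_update(theta, avg_cost, s_pd, cost, s_pd_next, alpha=0.01, beta=0.001):
    """One semi-gradient TD(0) step for the average-cost criterion.

    theta     : weights of the linear approximation V(s) ~ theta . phi(s)
    avg_cost  : running estimate of the long-term average cost
    s_pd      : current post-decision state
    cost      : observed stage cost (e.g. weighted delay plus power)
    s_pd_next : next post-decision state
    """
    phi, phi_next = features(s_pd), features(s_pd_next)
    # TD error for the differential (average-cost) value function
    delta = cost - avg_cost + theta @ phi_next - theta @ phi
    theta = theta + alpha * delta * phi      # semi-gradient weight update
    avg_cost = avg_cost + beta * delta       # track the average cost estimate
    return theta, avg_cost

# Example usage with a hypothetical 4-dimensional post-decision state
theta, avg_cost = np.zeros(4), 0.0
theta, avg_cost = td_update(theta, avg_cost,
                            s_pd=[2, 1, 0, 1], cost=3.5,
                            s_pd_next=[1, 1, 1, 0])
```

In the semi-distributed scheme described above, each device would maintain such a value estimate locally for its offloading decision, while the bids submitted to the BS for auction-based scheduling could be derived from the same approximate value function.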
DOI: 10.1109/JIOT.2019.2900550