A reinforcement learning based mobile charging sequence scheduling algorithm for optimal sensing coverage in wireless rechargeable sensor networks

Bibliographic Details
Published in: Journal of Ambient Intelligence and Humanized Computing, Vol. 15, No. 6, pp. 2869-2881
Main Authors: Li, Jinglin; Wang, Haoran; Xiao, Wendong
Format: Journal Article
Language: English
Published: Berlin/Heidelberg: Springer Berlin Heidelberg; Springer Nature B.V., 01.06.2024
ISSN: 1868-5137, 1868-5145
Description
Summary: Mobile charging provides a new way to replenish energy in the Wireless Rechargeable Sensor Network (WRSN), where a Mobile Charger (MC) charges nodes sequentially via wireless energy transfer according to a mobile charging sequence schedule. Mobile Charging Sequence Scheduling for Optimal Sensing Coverage (MCSS-OSC) is critical to network application performance: it aims to maximize the Quality of Sensing Coverage (QSC) of the network by optimizing the MC's charging sequence, and it remains challenging due to its NP-complete nature. In this paper, we propose a novel Improved Q-learning Algorithm (IQA) for MCSS-OSC, in which the MC acts as an agent that continuously learns the space of mobile charging strategies through approximate estimation and improves its charging strategy by interacting with the network environment. A novel reward function, based on each node's contribution to network sensing coverage, evaluates the MC's charging action at every charging time step. In addition, an efficient exploration strategy is designed by introducing an optimal experience-strengthening mechanism that regularly records the current best mobile charging sequence. Extensive simulations in MATLAB 2021 show that IQA outperforms existing heuristic algorithms in network QSC, especially for large-scale networks. This paper provides an efficient solution for WRSN energy management and new ideas for performance optimization of reinforcement learning algorithms.
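
The abstract describes a Q-learning formulation in which the MC is the agent, the next node to charge is the action, the reward reflects a node's sensing-coverage contribution, and the best sequence found so far is retained as strengthened experience. The sketch below is a minimal Python illustration of that kind of formulation only; the coverage model, parameter values, and the simple best-sequence memory are assumptions for illustration and not the authors' IQA implementation.

# Minimal, illustrative Q-learning sketch for mobile charging sequence scheduling.
# Agent = MC, action = next node to charge, reward = assumed coverage contribution.
# All names and the coverage model are hypothetical, not the IQA from the paper.
import random
from collections import defaultdict

NUM_NODES = 10          # hypothetical number of rechargeable sensor nodes
EPISODES = 500
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

# Hypothetical per-node contribution to the Quality of Sensing Coverage (QSC).
coverage_gain = [random.uniform(0.5, 1.5) for _ in range(NUM_NODES)]

# Q-table keyed by (frozenset of already-charged nodes, current node, action).
Q = defaultdict(float)

def choose_action(charged, current, remaining):
    """Epsilon-greedy selection of the next node to charge."""
    if random.random() < EPSILON:
        return random.choice(list(remaining))
    return max(remaining, key=lambda a: Q[(charged, current, a)])

best_sequence, best_qsc = None, float("-inf")
for _ in range(EPISODES):
    charged, sequence, total_qsc = frozenset(), [], 0.0
    current = -1  # -1 denotes the MC's depot / starting position
    while len(charged) < NUM_NODES:
        remaining = set(range(NUM_NODES)) - charged
        action = choose_action(charged, current, remaining)
        # Reward: the charged node's coverage contribution, discounted the
        # later it is charged in the sequence (an assumed toy model).
        reward = coverage_gain[action] / (1 + 0.05 * len(charged))
        next_charged = charged | {action}
        next_remaining = set(range(NUM_NODES)) - next_charged
        best_next = max((Q[(next_charged, action, a)] for a in next_remaining),
                        default=0.0)
        # Standard Q-learning update.
        key = (charged, current, action)
        Q[key] += ALPHA * (reward + GAMMA * best_next - Q[key])
        charged, current = next_charged, action
        sequence.append(action)
        total_qsc += reward
    # Experience strengthening: keep the best charging sequence found so far.
    if total_qsc > best_qsc:
        best_qsc, best_sequence = total_qsc, sequence

print("best charging sequence:", best_sequence, "QSC estimate:", round(best_qsc, 3))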
DOI: 10.1007/s12652-024-04781-3