Advanced Handover Optimization (AHO) using deep reinforcement learning in 5G Networks
Saved in:
| Title: | Advanced Handover Optimization (AHO) using deep reinforcement learning in 5G Networks |
|---|---|
| Authors: | T. Senthil Kumar, Mardeni Roslee, J. Jayapradha, Yasir Ullah, Chilakala Sudhamani, Sufian Mousa Ibrahim Mitani, Anwar Faizd Osman, Fatimah Zaharah Ali |
| Source: | Journal of King Saud University: Computer and Information Sciences, Vol 37, Iss 6, Pp 1-14 (2025) |
| Publisher information: | Springer, 2025. |
| Publication year: | 2025 |
| Collection: | LCC: Electronic computers. Computer science |
| Keywords: | Machine learning (ML), 5G cellular networks, Handover optimization (HO), Deep reinforcement learning, Model performance, Electronic computers. Computer science, QA75.5-76.95 |
| Description: | Abstract: Handover (HO) management in 5G networks is an essential and sensitive mechanism, as the deployment of 5G networks is undergoing rapid change. We propose an Adaptive Handover Optimization (AHO) model that uses Deep Reinforcement Learning (DRL) to dynamically adapt key Handover Control Parameters (HCPs), increasing the handover completion rate and the request service rate by fine-tuning the Handover Margin (HOM) and Time to Trigger (TTT). The model jointly optimizes three key performance indicators of the HO process, namely the Handover Probability (HOP), Outage Probability (OP) and Ping-Pong Handover Probability (PPHP), so as to minimize signal degradation and service interruption. To this end, a Deep Deterministic Policy Gradient (DDPG) based actor-critic framework with four Deep Neural Networks (DNNs) is proposed, in which the four DNNs are trained to intelligently adjust HOM values based on real-time Reference Signal Received Power (RSRP) and Signal-to-Interference-plus-Noise Ratio (SINR) conditions. The system environment models the random mobility of User Equipment (UE) across the distributed base-station coverage area and derives the HO decisions from it. Simulation results show that the model significantly reduces unnecessary handovers and outage probability, improving network stability. The proposed model contributes a scalable, learning-based optimization strategy applicable to future-generation wireless communication systems. |
| Publication type: | article |
| File description: | electronic resource |
| Language: | English |
| ISSN: | 1319-1578; 2213-1248 |
| Relation: | https://doaj.org/toc/1319-1578; https://doaj.org/toc/2213-1248 |
| DOI: | 10.1007/s44443-025-00124-0 |
| Access URL: | https://doaj.org/article/d690457e5ec249be8292b252d2e7cc7a |
| Document code: | edsdoj.690457e5ec249be8292b252d2e7cc7a |
| Database: | Directory of Open Access Journals |
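The abstract above describes a DDPG-based actor-critic framework with four DNNs that tunes HOM and TTT from observed RSRP and SINR. The Python sketch below is purely illustrative of that general setup and is not the authors' implementation: the state/action layout (normalized RSRP and SINR in, HOM offset and TTT scaling out), network sizes, hyperparameters, and the random replay batch are all assumptions made for the example.

```python
# Purely illustrative: a minimal DDPG-style actor-critic with four networks
# (actor, critic, and their target copies), mapping observed RSRP/SINR to
# continuous HOM/TTT adjustments. State/action layout, network sizes,
# hyperparameters, and the random "replay batch" are assumptions, not the
# paper's implementation.
import copy
import torch
import torch.nn as nn

STATE_DIM = 2    # assumed state: normalized [RSRP, SINR]
ACTION_DIM = 2   # assumed action: [HOM offset, TTT scaling] in [-1, 1]
GAMMA, TAU = 0.99, 0.005

def mlp(in_dim, out_dim, out_act=None):
    """Small fully connected network used for both actor and critic."""
    layers = [nn.Linear(in_dim, 64), nn.ReLU(),
              nn.Linear(64, 64), nn.ReLU(),
              nn.Linear(64, out_dim)]
    if out_act is not None:
        layers.append(out_act)
    return nn.Sequential(*layers)

actor = mlp(STATE_DIM, ACTION_DIM, nn.Tanh())     # DNN 1: deterministic policy
critic = mlp(STATE_DIM + ACTION_DIM, 1)           # DNN 2: Q-value estimator
target_actor = copy.deepcopy(actor)               # DNN 3: target policy
target_critic = copy.deepcopy(critic)             # DNN 4: target Q-value
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

def soft_update(target, online):
    """Polyak-average the online weights into the target network."""
    with torch.no_grad():
        for t, o in zip(target.parameters(), online.parameters()):
            t.mul_(1.0 - TAU).add_(TAU * o)

def ddpg_step(s, a, r, s_next):
    """One DDPG update from a replay batch of (state, action, reward, next state)."""
    # Critic: regress Q(s, a) toward r + gamma * Q'(s', mu'(s')).
    with torch.no_grad():
        next_q = target_critic(torch.cat([s_next, target_actor(s_next)], dim=1))
        target_q = r + GAMMA * next_q
    critic_loss = nn.functional.mse_loss(critic(torch.cat([s, a], dim=1)), target_q)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()
    # Actor: maximize Q(s, mu(s)) with respect to the policy parameters.
    actor_loss = -critic(torch.cat([s, actor(s)], dim=1)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
    soft_update(target_actor, actor)
    soft_update(target_critic, critic)

if __name__ == "__main__":
    # Placeholder random transitions; a real study would draw these from a
    # UE-mobility / base-station simulator with a reward built from HOP, OP, PPHP.
    s = torch.randn(32, STATE_DIM)
    a = torch.rand(32, ACTION_DIM) * 2 - 1
    r = torch.randn(32, 1)
    s_next = torch.randn(32, STATE_DIM)
    ddpg_step(s, a, r, s_next)
    suggestion = actor(torch.tensor([[-0.5, 0.3]]))  # hypothetical normalized RSRP/SINR
    print("suggested [HOM, TTT] adjustment:", suggestion.detach().numpy())
```

In this sketch the reward is left to the environment; under the abstract's framing it would combine HOP, OP and PPHP so that one policy trades off unnecessary handovers against outage.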