RIS-Enabled UAV Swarm Optimization Framework for Energy Harvesting and Data Collection in Post-Disaster Recovery Management

Bibliographic Details
Published in: IEEE International Conference on Communications (2025), pp. 1286-1291
Main Authors: Dhuheir, Marwan, Hamdaoui, Bechir, Erbad, Aiman, Al-Fuqaha, Ala, Abdallah, Mohamed, Guizani, Mohsen
Format: Conference Proceeding
Language: English
Published: IEEE, 08.06.2025
ISSN: 1938-1883
Description
Summary: Unmanned aerial vehicles (UAVs) have proven useful for enabling wireless power transfer (WPT), resource offloading, and data collection from ground IoT devices in post-disaster scenarios where conventional communication infrastructure is compromised. As 6G networks emerge, offering ultra-reliable low-latency communication and enhanced energy efficiency, UAVs are poised to play a critical role in extending 6G features to challenging environments. The key challenges in this context include limited UAV flight duration, energy constraints, limited resources, and the reliability of data collection, all of which impact the effectiveness of UAV operations. Motivated by the need for efficient resource allocation and reliable data collection, we propose a solution that combines UAV swarms with reconfigurable intelligent surfaces (RIS) to optimize energy harvesting for IoT devices and enhance communication quality. We formulate the joint problem of resource optimization, UAV-RIS trajectory planning, and RIS configuration as a mixed-integer nonlinear programming (MINLP) problem and solve it under dynamic conditions by transforming it into a Markov decision process (MDP) and applying a deep reinforcement learning (DRL) approach based on the proximal policy optimization (PPO) algorithm. Simulation results demonstrate that our framework outperforms traditional approaches, including the Actor-Critic (AC) algorithm and a greedy solution, achieving superior performance in energy harvesting efficiency, data collection, and communication reliability.
DOI: 10.1109/ICC52391.2025.11161898
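
The summary describes casting joint UAV trajectory planning, resource allocation, and RIS phase configuration as an MDP solved with PPO. The sketch below illustrates that general setup only; it is not the paper's formulation. It defines a toy Gymnasium environment whose state (UAV positions, device battery levels, RIS phases), action (UAV velocities plus RIS phase adjustments), dynamics, reward, and every numeric constant are illustrative assumptions, and trains it with the Stable-Baselines3 PPO implementation. The name UavRisEnv and the placeholder channel/harvesting model are hypothetical.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO


class UavRisEnv(gym.Env):
    """Toy MDP for joint UAV trajectory and RIS phase control.

    All dimensions, dynamics, and reward weights are illustrative
    placeholders, not the formulation from the paper.
    """

    def __init__(self, n_uavs=3, n_devices=10, n_ris_elements=16):
        super().__init__()
        self.n_uavs = n_uavs
        self.n_devices = n_devices
        self.n_ris = n_ris_elements
        # Observation: UAV 2-D positions, device battery levels, RIS phases.
        obs_dim = 2 * n_uavs + n_devices + n_ris_elements
        self.observation_space = spaces.Box(-1.0, 1.0, (obs_dim,), np.float32)
        # Action: per-UAV 2-D velocity plus per-element RIS phase adjustment.
        act_dim = 2 * n_uavs + n_ris_elements
        self.action_space = spaces.Box(-1.0, 1.0, (act_dim,), np.float32)
        self.max_steps = 200

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.t = 0
        self.uav_pos = self.np_random.uniform(-1, 1, (self.n_uavs, 2))
        self.device_pos = self.np_random.uniform(-1, 1, (self.n_devices, 2))
        self.battery = self.np_random.uniform(0, 0.2, self.n_devices)
        self.phases = np.zeros(self.n_ris)
        return self._obs(), {}

    def _obs(self):
        return np.concatenate(
            [self.uav_pos.ravel(), self.battery, self.phases]
        ).astype(np.float32)

    def step(self, action):
        self.t += 1
        # Split the action into UAV velocities and RIS phase increments.
        vel = action[: 2 * self.n_uavs].reshape(self.n_uavs, 2)
        self.phases = np.clip(
            self.phases + 0.1 * action[2 * self.n_uavs:], -1.0, 1.0
        )
        self.uav_pos = np.clip(self.uav_pos + 0.05 * vel, -1.0, 1.0)
        # Placeholder harvesting model: energy falls off with squared
        # distance to the nearest UAV, scaled by a crude RIS-alignment gain.
        d = np.linalg.norm(
            self.device_pos[:, None, :] - self.uav_pos[None, :, :], axis=-1
        ).min(axis=1)
        gain = 1.0 + 0.5 * float(np.cos(self.phases).mean())
        harvested = gain * 0.01 / (d**2 + 0.1)
        self.battery = np.minimum(self.battery + harvested, 1.0)
        reward = float(harvested.sum())  # maximize total harvested energy
        terminated = bool(self.battery.min() >= 1.0)  # all devices charged
        truncated = self.t >= self.max_steps
        return self._obs(), reward, terminated, truncated, {}


# Train a PPO policy on the toy environment.
env = UavRisEnv()
model = PPO("MlpPolicy", env, verbose=0)
model.learn(total_timesteps=10_000)
```

In this sketch a single shared policy emits actions for the whole swarm; the paper's multi-UAV formulation, constraints, and reward shaping would differ.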