In-Space Computing for IoT Data Processing via Low Earth Orbit Satellites
| Title: | In-Space Computing for IoT Data Processing via Low Earth Orbit Satellites |
|---|---|
| Authors: | Shinde, Swapnil Sadashiv, Guruvayoorappan, Gayathri, De Cola, Tomaso, Tarchi, Daniele |
| Source: | 2025 IEEE International Conference on Communications Workshops (ICC Workshops), pp. 1171-1176 |
| Publisher Information: | IEEE, 2025. |
| Publication Year: | 2025 |
| Subject Terms: | Space vehicles, Satellites, Low Earth Orbit satellites, Reinforcement Learning, Data Processing, Routing, Energy Efficiency, Internet of Things, Optimization, Edge Computing, Low Earth Orbit, Internet of Things Data, Earth Orbit Satellites, Energy Cost, Data Space, Distributed Computing, Markov Decision Process, Constrained Optimization Problem, Node Selection, Internet of Things Technology, Deep Q Networks, Reinforcement Learning Agent, Computing Nodes, Reinforcement Learning Techniques, Dynamic Network, Average Energy, Internet of Things Devices, Load Balancing, Satellite Networks, Task Offloading, Q-function, Selection Policy, Primary Network, Space Resources, 6G Networks, Buffer Space, On-board Processing, Non-Terrestrial Networks, Hierarchical Reinforcement Learning |
| Description: | The Internet of Things (IoT) technology has become widespread in numerous domains, especially in remote regions where resources are scarce. The deployment of IoT technology has been facilitated by distributed edge computing (EC) systems spanning terrestrial to aerospace environments. This paper focuses on optimizing data routing, computing-node selection, and buffering policies for processing data in space by exploiting Low Earth Orbit (LEO) satellites. We introduce a constrained optimization problem to reduce the combined energy and latency costs associated with space data processing. This problem is structured as a hierarchical decision-making model using the Markov Decision Process (MDP) methodology and is addressed through Reinforcement Learning (RL) techniques. The proposed Hierarchical RL (HRL) framework employs two RL agents leveraging Deep Q Networks (DQN) to enhance data processing by optimizing node selection and improving buffering strategies for task execution. Simulation results show that the proposed framework outperforms existing methods in latency and energy efficiency. |
| Document Type: | Article; Conference object |
| File Description: | application/pdf |
| DOI: | 10.1109/ICCWorkshops67674.2025.11162441 |
| Access URL: | https://hdl.handle.net/2158/1436133 https://doi.org/10.1109/ICCWorkshops67674.2025.11162441 |
| Rights: | STM Policy #29 |
| Accession Number: | edsair.doi.dedup.....40f4b8afd3530fe24740bc59aaa57757 |
| Database: | OpenAIRE |
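
The abstract frames the problem as a constrained optimization over combined energy and latency costs, but the record does not reproduce the paper's formulation. A minimal weighted-sum sketch, assuming placeholder notation (weight $\alpha$, per-task energy $E_k$ and latency $T_k$, buffer and energy limits $B_n^{\max}$, $E_n^{\max}$) rather than the authors' own, could read:

```latex
\min_{\pi}\; \sum_{k=1}^{K} \big[\, \alpha\, E_k(\pi) + (1-\alpha)\, T_k(\pi) \,\big]
\quad \text{s.t.}\quad B_n(t) \le B_n^{\max}, \;\; E_n(t) \le E_n^{\max} \quad \forall\, n,\, t
```

where $\pi$ denotes the joint routing, node-selection, and buffering policy, $K$ the number of IoT tasks, and $B_n(t)$, $E_n(t)$ the buffer occupancy and energy use of satellite $n$ at time $t$.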
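Similarly, the two-agent HRL design mentioned in the abstract (one DQN agent for computing-node selection, one for buffering decisions) is only named, not specified, in this record. The Python sketch below is a hypothetical illustration of that general two-level DQN pattern; the state layout, action sets, network sizes, and environment interface are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the two-agent hierarchical DQN pattern described in
# the abstract: a high-level agent selects the LEO computing node, and a
# low-level agent then chooses a buffering action for the task on that node.
# All dimensions, names, and action sets are illustrative assumptions.
import random
import torch
import torch.nn as nn

class QNet(nn.Module):
    """Small MLP approximating the Q-function for one agent."""
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

def epsilon_greedy(qnet: QNet, state: torch.Tensor,
                   n_actions: int, eps: float) -> int:
    """Standard DQN action selection: random with prob. eps, else argmax Q."""
    if random.random() < eps:
        return random.randrange(n_actions)
    with torch.no_grad():
        return int(qnet(state).argmax().item())

# Illustrative sizes: 5 candidate LEO nodes, 3 buffering actions
# (e.g., process now / enqueue / drop), and a small flat state vector.
N_NODES, N_BUFFER_ACTIONS, STATE_DIM = 5, 3, 8

node_agent = QNet(STATE_DIM, N_NODES)                 # high level: node selection
buffer_agent = QNet(STATE_DIM + 1, N_BUFFER_ACTIONS)  # low level: buffering policy

state = torch.randn(STATE_DIM)  # placeholder network/task observation
node = epsilon_greedy(node_agent, state, N_NODES, eps=0.1)

# The low-level agent conditions on the node chosen above (appended to state).
low_state = torch.cat([state, torch.tensor([float(node)])])
buffer_action = epsilon_greedy(buffer_agent, low_state, N_BUFFER_ACTIONS, eps=0.1)

print(f"selected node {node}, buffering action {buffer_action}")
```

Training each agent would follow the standard DQN recipe (experience replay, target network, temporal-difference loss), with a reward such as the negative weighted energy-latency cost sketched above; that loop is omitted here.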