MGCO: Mobility-Aware Generative Computation Offloading in Edge-Cloud Systems
| Published in: | IEEE Transactions on Services Computing, pp. 1-16 |
|---|---|
| Main Authors: | |
| Format: | Journal Article |
| Language: | English |
| Published: | IEEE, 2025 |
| Subjects: | |
| ISSN: | 1939-1374, 2372-0204 |
| Summary: | Mobility introduces significant challenges for optimal computation offloading, latency minimization, and efficient resource utilization in multi-access edge computing (MEC) systems. A key difficulty lies in leveraging real user trajectories to jointly optimize horizontal (inter-edge) and vertical (edge-to-cloud) task offloading decisions. This paper proposes a two-dimensional offloading scheme for a multi-layer edge-cloud architecture that enables collaborative task execution among resource-constrained edge nodes under mobility conditions. We present MGCO (Mobility-Aware Generative Computation Offloading), a generative AI-driven, Transformer-based sequence-to-sequence Deep Q-Network (s2s-DQN) framework that learns from real-time trajectory data to anticipate user movement and optimize task placement dynamically. The Transformer architecture is adopted because its multi-head self-attention effectively captures long-range dependencies in mobility and task-demand patterns while avoiding the vanishing gradients and sequential bottlenecks inherent to LSTM/GRU models. This design enables parallel contextual reasoning and stable autoregressive action generation, supporting real-time offloading decisions within strict operational latency constraints. Experimental results demonstrate that MGCO consistently outperforms existing methods, achieving up to a 41.61% reduction in turnaround time compared to GASTO, and substantial improvements over DMQTO and HMAOA, reaching up to 645.40% and 751.90%, respectively, for longer prediction horizons (48 time slots of 5 seconds each). These results highlight MGCO's robustness, scalability, and effectiveness in managing complex mobility scenarios in dynamic edge-cloud environments. |
|---|---|
| DOI: | 10.1109/TSC.2025.3632862 |
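
The summary describes a Transformer-based sequence-to-sequence Deep Q-Network that encodes a user's trajectory and task-demand history and autoregressively generates per-slot offloading actions. As a rough illustration of that kind of architecture, and not the authors' implementation, the PyTorch sketch below pairs a Transformer encoder over the observation sequence with a causally masked decoder that emits Q-values over a discrete offloading action set. The class name `S2SOffloadingQNet`, the observation dimension, the action count, and every hyperparameter are illustrative assumptions.

```python
# Minimal sketch of a Transformer-based sequence-to-sequence Q-network
# for mobility-aware offloading, in the spirit of the s2s-DQN described
# in the summary above. NOT the authors' code: S2SOffloadingQNet, obs_dim,
# n_actions, and all hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn


class S2SOffloadingQNet(nn.Module):
    def __init__(self, obs_dim=16, n_actions=6, d_model=64,
                 nhead=4, num_layers=2, horizon=48):
        super().__init__()
        self.obs_proj = nn.Linear(obs_dim, d_model)             # embed per-slot observations
        self.act_embed = nn.Embedding(n_actions + 1, d_model)   # +1 for a <start> token
        self.pos = nn.Embedding(horizon, d_model)               # learned positional encoding
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead, 4 * d_model, batch_first=True)
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead, 4 * d_model, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers)
        self.q_head = nn.Linear(d_model, n_actions)             # Q-values per offloading action
        self.start_token = n_actions

    def forward(self, obs_seq, prev_actions):
        # obs_seq:      (B, T, obs_dim) trajectory + task-demand features, T <= horizon
        # prev_actions: (B, T)          previously chosen actions, right-shifted with <start>
        B, T, _ = obs_seq.shape
        pos = self.pos(torch.arange(T, device=obs_seq.device))
        memory = self.encoder(self.obs_proj(obs_seq) + pos)     # contextual mobility encoding
        causal = torch.triu(torch.full((T, T), float("-inf"),
                                       device=obs_seq.device), diagonal=1)
        h = self.decoder(self.act_embed(prev_actions) + pos, memory, tgt_mask=causal)
        return self.q_head(h)                                   # (B, T, n_actions)

    @torch.no_grad()
    def greedy_plan(self, obs_seq):
        # Autoregressively pick one offloading action per time slot,
        # conditioning each step on the actions already generated.
        B, T, _ = obs_seq.shape
        prev = torch.full((B, T), self.start_token, dtype=torch.long, device=obs_seq.device)
        plan = torch.zeros(B, T, dtype=torch.long, device=obs_seq.device)
        for t in range(T):
            q_t = self.forward(obs_seq, prev)[:, t]             # Q-values for slot t
            plan[:, t] = q_t.argmax(dim=-1)                     # greedy action
            if t + 1 < T:
                prev[:, t + 1] = plan[:, t]                     # feed back for next step
        return plan
```

For the paper's longest horizon (48 time slots of 5 seconds each), `obs_seq` would be a `(batch, 48, obs_dim)` tensor. Training such a network as a DQN would additionally require experience replay, a target network, and a temporal-difference loss over the per-slot Q-values, all of which are omitted from this sketch.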