Enhancing AI-Generated Content Efficiency Through Adaptive Multi-Edge Collaboration


Bibliographic Details

Published in: Proceedings of the International Conference on Distributed Computing Systems, pp. 960-970
Authors: Xu, Changfu; Guo, Jianxiong; Zeng, Jiandian; Meng, Shengguang; Chu, Xiaowen; Cao, Jiannong; Wang, Tian
Format: Conference paper
Language: English
Published: IEEE, 23 July 2024
ISSN: 2575-8411
Online Access: Full text
Description
Abstract: The Artificial Intelligence-Generated Content (AIGC) technique has gained significant popularity in creating diverse content. However, the current deployment of AIGC services in a centralized framework leads to high response times. To address this issue, we propose the integration of collaborative Mobile Edge Computing (MEC) technology to decrease the processing delay of AIGC services. Nevertheless, existing collaborative MEC methods only facilitate collaborative processing among fixed Edge Servers (ESs), limiting flexibility and resource utilization across heterogeneous ESs with the different computing and networking requirements of AIGC tasks. This poses challenges for efficient resource allocation. We present an adaptive multi-server collaborative MEC approach tailored for heterogeneous edge environments that achieves efficient AIGC by dynamically allocating task workload across multiple ESs. We formulate our problem as an online linear programming problem that minimizes task offloading make-span. We prove this problem is NP-hard and propose an online adaptive multi-server selection and allocation algorithm based on deep reinforcement learning that effectively addresses it. Additionally, we provide a theoretical performance analysis demonstrating that our algorithm achieves near-optimal solutions within approximately linear time complexity. Finally, experimental results validate the effectiveness of our method, showing at least an 11.04% reduction in task offloading make-span and a 44.86% decrease in failure rate compared to state-of-the-art methods.
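To illustrate the workload-splitting idea behind make-span minimization across heterogeneous edge servers, here is a minimal sketch for the idealized case of a perfectly divisible task and known server speeds. This is not the paper's algorithm (which handles online arrivals via deep reinforcement learning); the server speeds and the divisibility assumption are hypothetical, chosen only to show why proportional allocation equalizes finish times.

```python
def split_workload(total_work, speeds):
    """Split a divisible task across servers in proportion to their
    processing speeds, so every server finishes at the same moment.
    For this idealized model, that schedule minimizes the make-span.

    total_work: total task size (e.g., in FLOPs or work units)
    speeds:     per-server processing rates (work units per second)
    Returns (per-server shares, resulting make-span).
    """
    total_speed = sum(speeds)
    # Each server's share is weighted by its speed; share / speed is
    # then identical for all servers, so no server idles at the end.
    shares = [total_work * s / total_speed for s in speeds]
    makespan = total_work / total_speed
    return shares, makespan


# Example: 16 work units over three heterogeneous servers.
shares, makespan = split_workload(16, [4, 2, 2])
print(shares, makespan)  # [8.0, 4.0, 4.0] 2.0
```

In practice, edge tasks are not perfectly divisible, server loads change online, and network delays differ per server, which is why the abstract formulates the real setting as an online problem and resorts to a learning-based policy rather than this closed-form split.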
DOI: 10.1109/ICDCS60910.2024.00093