MMDFL: Multi-Model-based Decentralized Federated Learning for Resource-Constrained AIoT Systems

Bibliographic Details
Published in: 2025 62nd ACM/IEEE Design Automation Conference (DAC), pp. 1-7
Main authors: Yan, Dengke; Yang, Yanxin; Hu, Ming; Fu, Xin; Chen, Mingsong
Format: Conference proceedings
Language: English
Published: IEEE, June 22, 2025
Online Access: Full text
Description
Summary: Along with the prosperity of Artificial Intelligence (AI) techniques, more and more Artificial Intelligence of Things (AIoT) applications adopt Federated Learning (FL) to enable collaborative learning without compromising the privacy of devices. Since existing centralized FL methods suffer from the single-point-of-failure and communication-bottleneck problems caused by the parameter server, we are witnessing an increasing use of Decentralized Federated Learning (DFL), which is based on Peer-to-Peer (P2P) communication without using a global model. However, DFL still faces three major challenges, i.e., the limited computing power and network bandwidth of resource-constrained devices, non-Independent and Identically Distributed (non-IID) device data, and all-neighbor-dependent knowledge aggregation operations, all of which greatly suppress the learning potential of existing DFL methods. To address these problems, this paper presents an efficient DFL framework named MMDFL based on our proposed multi-model-based learning and knowledge aggregation mechanism. Specifically, MMDFL adopts multiple traveler models, which perform local training individually along their traversed devices, accelerating and maximizing knowledge learning and sharing among devices. Moreover, based on our proposed device selection strategy, MMDFL enables each traveler to adaptively explore its next best neighboring device to further enhance DFL training performance, taking into account data heterogeneity, limited resources, and the catastrophic forgetting phenomenon. Experimental results from simulation and a real testbed show that, compared with state-of-the-art DFL methods, MMDFL not only significantly reduces communication overhead but also achieves better overall classification performance in both IID and non-IID scenarios.
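
The abstract describes the traveler mechanism only at a high level, so the following is a minimal Python sketch of one plausible reading of it: each traveler trains locally on its current device, then scores that device's neighbors and hops to the best one. Every name here (Device, traveler_step, the scoring terms and their weights) is a hypothetical illustration; the paper's actual device selection strategy, training routine, and aggregation rule are not specified in this record.

    from dataclasses import dataclass

    @dataclass
    class Device:
        """A resource-constrained AIoT node in the P2P topology."""
        id: int
        neighbors: list          # ids of directly connected devices
        compute: float           # normalized available compute in [0, 1]
        label_hist: list         # local class distribution (non-IID proxy)
        last_visit: int = -1     # round at which a traveler last trained here

    def local_train(model, device):
        """Placeholder for a few epochs of on-device training of the
        traveler model on the device's local data."""
        return model

    def next_device(here, seen, rnd, devices):
        """Pick the neighbor whose data best complements what the traveler
        has already seen, discounted by visit staleness and compute.
        All three terms and the 0.1 weight are assumptions."""
        def score(nid):
            d = devices[nid]
            dissim = sum(abs(a - b) for a, b in zip(seen, d.label_hist))
            staleness = rnd - d.last_visit   # favor long-unvisited devices
            return dissim + 0.1 * staleness + d.compute
        return max(here.neighbors, key=score)

    def traveler_step(model, pos, seen, rnd, devices):
        """One traveler move: train locally, record the visit, then hop
        to the adaptively chosen next neighbor."""
        here = devices[pos]
        model = local_train(model, here)
        here.last_visit = rnd
        seen = [s + p for s, p in zip(seen, here.label_hist)]
        return model, next_device(here, seen, rnd, devices), seen

    # Example: a 3-device ring with 2 classes; one traveler starts at device 0.
    devices = {
        0: Device(0, [1, 2], 0.8, [0.9, 0.1]),
        1: Device(1, [0, 2], 0.5, [0.1, 0.9]),
        2: Device(2, [0, 1], 0.3, [0.5, 0.5]),
    }
    model, pos, seen = None, 0, [0.0, 0.0]
    for rnd in range(5):
        model, pos, seen = traveler_step(model, pos, seen, rnd, devices)

In the full framework, several such travelers would run concurrently over the same device graph and their knowledge would be merged through the paper's multi-model aggregation mechanism; the dissimilarity and staleness terms above are merely one heuristic way to reflect the abstract's stated concerns about non-IID data and catastrophic forgetting.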
DOI: 10.1109/DAC63849.2025.11133116