MMDFL: Multi-Model-based Decentralized Federated Learning for Resource-Constrained AIoT Systems


Published in: 2025 62nd ACM/IEEE Design Automation Conference (DAC), pp. 1-7
Main authors: Yan, Dengke; Yang, Yanxin; Hu, Ming; Fu, Xin; Chen, Mingsong
Medium: Conference paper
Language: English
Published: IEEE, 22 June 2025
Description
Summary: Along with the prosperity of Artificial Intelligence (AI) techniques, more and more Artificial Intelligence of Things (AIoT) applications adopt Federated Learning (FL) to enable collaborative learning without compromising the privacy of devices. Since existing centralized FL methods suffer from the problems of single-point-of-failure and the communication bottleneck caused by the parameter server, we are witnessing an increasing use of Decentralized Federated Learning (DFL), which is based on Peer-to-Peer (P2P) communication without using a global model. However, DFL still faces three major challenges, i.e., the limited computing power and network bandwidth of resource-constrained devices, non-Independent and Identically Distributed (non-IID) device data, and all-neighbor-dependent knowledge aggregation operations, all of which greatly suppress the learning potential of existing DFL methods. To address these problems, this paper presents an efficient DFL framework named MMDFL based on our proposed multi-model-based learning and knowledge aggregation mechanism. Specifically, MMDFL adopts multiple traveler models, which perform local training individually along their traversed devices, accelerating and maximizing knowledge learning and sharing among devices. Moreover, based on our proposed device selection strategy, MMDFL enables each traveler to adaptively explore its next best neighboring device to further enhance DFL training performance, taking into account data heterogeneity, limited resources, and the catastrophic-forgetting phenomenon. Experimental results from simulation and a real testbed show that, compared with state-of-the-art DFL methods, MMDFL not only significantly reduces communication overhead but also achieves better overall classification performance in both IID and non-IID scenarios.
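The abstract does not disclose the actual algorithm, but the traveler-model idea — models that train locally on one device at a time and then hop to a chosen neighbor, instead of all-neighbor aggregation — can be illustrated with a toy sketch. Everything below is a hypothetical illustration, not the authors' method: the scalar model, the squared-error local update, and the least-visited-neighbor rule (standing in for the paper's device selection strategy, which the abstract describes but does not specify) are all assumptions.

```python
def local_train(w, data, lr=0.1):
    # Toy local training: gradient steps on squared error (w - x)^2,
    # i.e. w <- w - lr * 2 * (w - x), a stand-in for on-device SGD.
    for x in data:
        w = w - lr * 2 * (w - x)
    return w

def select_next(current, neighbors, visit_counts):
    # Hypothetical stand-in for MMDFL's device selection strategy:
    # send the traveler to the least-visited neighboring device.
    return min(neighbors[current], key=lambda d: visit_counts[d])

def mmdfl_round(travelers, positions, devices, neighbors, visit_counts):
    # One round: each traveler model trains on its current device,
    # then hops to the next device chosen by the selection rule.
    for t in range(len(travelers)):
        dev = positions[t]
        travelers[t] = local_train(travelers[t], devices[dev])
        visit_counts[dev] += 1
        positions[t] = select_next(dev, neighbors, visit_counts)
    return travelers, positions
```

Running two travelers over three devices with non-IID data (each device holds values around a different mean) shows the core benefit the abstract claims: knowledge spreads without any global model or all-neighbor aggregation, since each traveler accumulates information from every device it traverses.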
DOI:10.1109/DAC63849.2025.11133116