Personalized Heterogeneity-aware Federated Search Towards Better Accuracy and Energy Efficiency

Detailed bibliography
Published in: 2022 IEEE/ACM International Conference on Computer-Aided Design (ICCAD), pp. 1–9
Main authors: Yang, Zhao; Sun, Qingshuang
Format: Conference paper
Language: English
Published: ACM, 29 October 2022
ISSN: 1558-2434
Description
Summary: Federated learning (FL), an emerging distributed learning technology, allows a global model to be trained on edge and embedded devices without sharing local data. However, because FL is deployed across many different types of devices, it faces severe heterogeneity issues: heterogeneous data and heterogeneous systems severely degrade the accuracy and efficiency of FL deployment at the edge. In this paper, we perform joint FL model personalization for heterogeneous systems and heterogeneous data to address these challenges. We first use model inference efficiency to personalize the network scale on each node. The personalized scale also guides an efficient FL training process, which eases the straggler problem and improves FL's energy efficiency. During FL training, federated search is then used to obtain highly accurate personalized network structures. By accounting for the unique characteristics of FL deployment on edge devices, the personalized network structures found by our federated search framework with a lightweight search controller achieve accuracy competitive with state-of-the-art (SOTA) methods while reducing inference and training energy consumption by up to 3.57× and 1.82×, respectively.
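As a rough illustration of the heterogeneity-aware personalization described in the abstract, the minimal Python sketch below simulates federated rounds in which each client trains a width-scaled sub-network of a shared layer (its scale chosen by device capability) and the server averages only the weight entries that each client's sub-network covers. The scale factors, the client_update objective, and the overlap-averaging rule are all illustrative assumptions in the spirit of width-scaling schemes such as HeteroFL; the paper's actual federated search controller is not reproduced here.

    import numpy as np

    def client_update(sub_weights, lr=0.1, steps=5):
        # Stand-in for local training: a few gradient steps on synthetic
        # data for the loss 0.5 * ||w x - y||^2 (hypothetical objective).
        rng = np.random.default_rng(0)
        w = sub_weights.copy()
        for _ in range(steps):
            x = rng.standard_normal(w.shape[1])
            y = rng.standard_normal(w.shape[0])
            grad = np.outer(w @ x - y, x)
            w -= lr * grad
        return w

    def aggregate(global_w, client_ws):
        # Average each weight entry over the clients whose scaled
        # sub-network actually covers that entry.
        acc = np.zeros_like(global_w)
        cnt = np.zeros_like(global_w)
        for w in client_ws:
            r, c = w.shape
            acc[:r, :c] += w
            cnt[:r, :c] += 1
        covered = cnt > 0
        new_w = global_w.copy()
        new_w[covered] = acc[covered] / cnt[covered]
        return new_w

    # One shared 8x8 layer; per-client width scales (hypothetical values
    # standing in for scales derived from measured inference efficiency).
    global_w = np.zeros((8, 8))
    scales = [1.0, 0.5, 0.25]

    for rnd in range(3):  # a few federated rounds
        client_ws = []
        for s in scales:
            r, c = int(8 * s), int(8 * s)
            client_ws.append(client_update(global_w[:r, :c]))
        global_w = aggregate(global_w, client_ws)

Because weaker devices train strictly smaller sub-networks, no client waits on a model it cannot afford to run, which is the intuition behind easing the straggler problem and cutting training energy.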
DOI: 10.1145/3508352.3549403