Tuning DNN Model Compression to Resource and Data Availability in Cooperative Training



Bibliographic Details
Published in: IEEE/ACM Transactions on Networking, Vol. 32, No. 2, pp. 1–16
Main Authors: Malandrino, Francesco; di Giacomo, Giuseppe; Karamzade, Armin; Levorato, Marco; Chiasserini, Carla Fabiana
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.04.2024
Subjects:
ISSN: 1063-6692, 1558-2566
Online Access: Full text
Description
Summary: Model compression is a fundamental tool to execute machine learning (ML) tasks on the diverse set of devices populating current- and next-generation networks, thereby exploiting their resources and data. At the same time, how much and when to compress ML models are very complex decisions, as they have to jointly account for such aspects as the model being used, the resources (e.g., computational) and local datasets available at each node, as well as network latencies. In this work, we address the multi-dimensional problem of adapting the model compression, data selection, and node allocation decisions to each other: our objective is to perform the DNN training at the minimum energy cost, subject to learning quality and time constraints. To this end, we propose an algorithmic framework called PACT, combining a time-expanded graph representation of the training process, a dynamic programming solution strategy, and a data-driven approach to the estimation of the loss evolution. We prove that PACT's complexity is polynomial, and its decisions can get arbitrarily close to the optimum. Through our numerical evaluation, we further show how PACT can consistently outperform state-of-the-art alternatives and closely match the optimal energy consumption.
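The summary only sketches PACT at a high level. As an illustration of the general idea of a dynamic-programming search over a time-expanded graph of training stages, the following Python sketch minimizes energy cost under a time constraint; all stage options, cost values, and names (OPTIONS, DEADLINE, min_energy) are illustrative assumptions, not the paper's actual formulation.

```python
# Minimal, hypothetical sketch of dynamic programming over a time-expanded
# graph of training stages: states are (stage, elapsed-time bucket), edges
# are per-stage choices (e.g., compression level / node), and the goal is
# the energy-minimal plan that meets a deadline. Values are made up.

from math import inf

OPTIONS = [
    # each stage: list of (energy, time) pairs for the available choices
    [(5.0, 2.0), (3.0, 4.0)],
    [(4.0, 1.0), (2.0, 3.0)],
    [(6.0, 2.0), (3.0, 5.0)],
]
DEADLINE = 9.0  # learning-time constraint (assumed units)


def min_energy(options, deadline, step=1.0):
    """Return the minimum total energy that completes all stages by the deadline."""
    buckets = int(deadline / step) + 1
    # best[t] = minimum energy to finish the stages processed so far in t*step time
    best = [inf] * buckets
    best[0] = 0.0
    for stage_opts in options:
        nxt = [inf] * buckets
        for t, e in enumerate(best):
            if e == inf:
                continue
            for energy, time in stage_opts:
                t2 = t + int(round(time / step))
                if t2 < buckets:  # discard plans that would miss the deadline
                    nxt[t2] = min(nxt[t2], e + energy)
        best = nxt
    return min(best)  # inf means no feasible plan


if __name__ == "__main__":
    print("minimum energy within deadline:", min_energy(OPTIONS, DEADLINE))
```

The table over time buckets keeps the search polynomial in the number of stages, choices, and time steps, which mirrors the kind of complexity argument the summary attributes to PACT.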
DOI: 10.1109/TNET.2023.3323023