Differentially private stochastic gradient descent via compression and memorization

Bibliographic Details
Published in: Journal of Systems Architecture, Vol. 135, Art. no. 102819
Authors: Phong, Le Trieu; Phuong, Tran Thi
Format: Journal Article
Language: English
Published: Elsevier B.V., 01.02.2023
Keywords:
ISSN: 1383-7621, 1873-6165
Online access: Full text
Description
Abstract: We propose a novel approach for achieving differential privacy in neural network training through compression and memorization of gradients. The compression technique, which makes gradient vectors sparse, reduces the sensitivity so that differential privacy can be achieved with less noise, whereas the memorization technique, which remembers the unused gradient parts, keeps track of the descent direction and thereby maintains the accuracy of the proposed algorithm. Our differentially private algorithm, called dp-memSGD for short, converges mathematically at the same 1/T rate as the standard stochastic gradient descent (SGD) algorithm, where T is the number of training iterations. Experimentally, we demonstrate that dp-memSGD converges with reasonable privacy losses on many benchmark datasets.
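The abstract describes the two mechanisms only at a high level. The sketch below is an illustrative interpretation of one such update step, not the authors' exact dp-memSGD algorithm: it combines top-k gradient sparsification (compression), an error-feedback buffer that remembers the discarded coordinates (memorization), and clipping plus Gaussian noise restricted to the retained coordinates. The function name dp_mem_sgd_step and the parameters k, clip, sigma, and lr are assumptions made for this example, and per-example clipping and formal privacy accounting are omitted.

```python
import numpy as np

def dp_mem_sgd_step(w, grad, memory, lr=0.1, k=10, clip=1.0, sigma=1.0, rng=None):
    # One illustrative step: error-feedback memory + top-k sparsification
    # + clipping + Gaussian noise on the k kept coordinates.
    rng = np.random.default_rng() if rng is None else rng

    # Memorization: fold the previously unused gradient parts back in.
    acc = memory + grad

    # Compression: keep only the k largest-magnitude coordinates.
    idx = np.argpartition(np.abs(acc), -k)[-k:]
    sparse = np.zeros_like(acc)
    sparse[idx] = acc[idx]

    # Remember what was dropped, so the descent direction is preserved.
    new_memory = acc - sparse

    # Sensitivity control: clip the sparse vector to norm <= clip, then
    # add Gaussian noise only on the k retained coordinates.
    sparse *= min(1.0, clip / (np.linalg.norm(sparse) + 1e-12))
    noise = np.zeros_like(sparse)
    noise[idx] = rng.normal(0.0, sigma * clip, size=k)

    # Plain SGD update with the privatized, sparsified gradient.
    return w - lr * (sparse + noise), new_memory

# Toy usage: minimize ||w||^2 with noisy, sparsified gradient steps.
w = np.random.randn(100)
memory = np.zeros_like(w)
for _ in range(200):
    grad = 2.0 * w  # gradient of ||w||^2
    w, memory = dp_mem_sgd_step(w, grad, memory, lr=0.05, k=10, clip=1.0, sigma=0.5)
```

In this reading, sparsity confines the noise to k coordinates (lowering the effective sensitivity), while the error-feedback buffer carries the clipped-away mass forward so the overall descent direction is retained.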
DOI: 10.1016/j.sysarc.2022.102819