Deep Randomized Distributed Function Computation (DeepRDFC): Neural Distributed Channel Simulation

Detailed Bibliography
Published in: Proceedings / IEEE International Symposium on Information Theory, pp. 1-6
Main Authors: Bergstrom, Didrik; Gunlu, Onur
Format: Conference paper
Language: English
Published: IEEE, 22.06.2025
Subjects:
ISSN: 2157-8117
Description
Summary: The randomized distributed function computation (RDFC) framework, which unifies many cutting-edge distributed computation and learning applications, is considered. An autoencoder (AE) architecture is proposed to minimize the total variation distance between the probability distribution simulated by the AE outputs and an unknown target distribution, using only data samples. We demonstrate significant RDFC performance and communication-load gains from our AEs compared to data compression methods. Our designs establish deep learning-based RDFC methods and aim to facilitate their use, especially when the amount of common randomness is limited and strong function computation guarantees are required.
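A minimal, illustrative sketch of the idea described in the summary: an autoencoder whose decoder outputs a categorical distribution over a finite alphabet, trained to reduce an empirical total variation (TV) distance between the distribution simulated by its outputs and a target distribution estimated only from samples. The alphabet size, layer widths, training loop, and the names `TVAutoencoder` and `tv_distance` are assumptions for illustration and are not taken from the paper.

```python
# Illustrative sketch only; not the paper's architecture. All sizes and the
# training objective below are assumptions made for this example.
import torch
import torch.nn as nn

ALPHABET = 8   # assumed finite output alphabet size
LATENT = 3     # assumed low-dimensional bottleneck ("message") size

class TVAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(ALPHABET, 16), nn.ReLU(),
                                     nn.Linear(16, LATENT))
        self.decoder = nn.Sequential(nn.Linear(LATENT, 16), nn.ReLU(),
                                     nn.Linear(16, ALPHABET))

    def forward(self, x):
        # Decoder outputs a probability vector over the alphabet via softmax.
        return torch.softmax(self.decoder(self.encoder(x)), dim=-1)

def tv_distance(p, q):
    # Total variation distance between two probability vectors: 0.5 * ||p - q||_1.
    return 0.5 * (p - q).abs().sum()

# Toy target distribution and one-hot samples drawn from it (assumption).
target = torch.distributions.Dirichlet(torch.ones(ALPHABET)).sample()
samples = torch.distributions.Categorical(target).sample((512,))
x = torch.nn.functional.one_hot(samples, ALPHABET).float()

model = TVAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for step in range(200):
    opt.zero_grad()
    # Average the per-sample output distributions to get the simulated marginal,
    # then penalize its TV distance to the empirical target distribution.
    simulated = model(x).mean(dim=0)
    empirical = x.mean(dim=0)
    loss = tv_distance(simulated, empirical)
    loss.backward()
    opt.step()
print(f"final TV distance: {loss.item():.4f}")
```

In the RDFC setting described in the summary, the bottleneck would correspond to the communicated message (possibly combined with limited common randomness); here it is just a low-dimensional layer used to illustrate the TV-distance training objective.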
DOI: 10.1109/ISIT63088.2025.11195680