Fundamental limits of distributed tracking
| Published in: | Proceedings / IEEE International Symposium on Information Theory, pp. 2438-2443 |
|---|---|
| Format: | Conference paper |
| Language: | English |
| Published: | IEEE, 01.06.2020 |
| ISSN: | 2157-8117 |
| Online access: | Full text |
| Abstract: | Consider the following communication scenario. An n-dimensional source with memory is observed by K isolated encoders via parallel channels, who causally compress their observations to transmit to the decoder via noiseless rate-constrained links. At each time instant, the decoder receives K new codewords from the observers, combines them with the past received codewords, and produces a minimum-distortion estimate of the latest block of n source symbols. This scenario extends the classical one-shot CEO problem to multiple rounds of communication with communicators maintaining memory of the past. We prove a coding theorem showing that the minimum asymptotically (as n → ∞) achievable sum rate required to achieve a target distortion is equal to the directed mutual information from the observers to the decoder, minimized subject to the distortion constraint and the separate encoding constraint. For the Gauss-Markov source observed via K parallel AWGN channels, we solve that minimal directed mutual information problem, thereby establishing the minimum asymptotically achievable sum rate. Finally, we explicitly bound the rate loss due to a lack of communication among the observers; that bound is attained with equality in the case of identical observation channels. The general coding theorem is proved via a new nonasymptotic bound that uses stochastic likelihood coders and whose asymptotic analysis yields an extension of the Berger-Tung inner bound to the causal setting. The analysis of the Gaussian case is facilitated by reversing the channels of the observers. |
|---|---|
| DOI: | 10.1109/ISIT44484.2020.9174006 |
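A schematic rendering of the coding theorem summarized in the abstract may help orient the reader; the notation below is an editorial assumption, not the paper's own. Writing Y_k^t for the observations available to encoder k up to time t (each symbol standing for an n-dimensional block) and X̂^T for the decoder's causal estimates over a horizon T, the claimed minimum sum rate at target distortion D has the form of a minimal directed mutual information:

```latex
% Schematic form of the sum-rate characterization (notation assumed, not the paper's).
% Directed information counts only the causally ordered dependence of the
% estimates on the observations, which is what multi-round communication with
% memory requires.
R_{\mathrm{sum}}(D) \;=\; \lim_{n\to\infty} \frac{1}{n}\,
  \min \; I\!\left(Y_1^T,\ldots,Y_K^T \to \widehat{X}^T\right),
\qquad
I\!\left(Y^T \to \widehat{X}^T\right) \;\triangleq\;
  \sum_{t=1}^{T} I\!\left(Y^t;\, \widehat{X}_t \,\middle|\, \widehat{X}^{t-1}\right),
```

where the minimum is over encoding policies that are causal, act separately at each of the K observers, and meet the distortion constraint E d(X_t, X̂_t) ≤ D.

For the Gaussian case treated in the paper, the following toy simulation illustrates the observation model only: a scalar Gauss-Markov source watched by K observers through parallel AWGN channels, tracked by an idealized causal MMSE (Kalman) estimator with unconstrained access to all observations. All parameters and names here are illustrative assumptions; this rate-unconstrained baseline is not the paper's coding scheme.

```python
import numpy as np

# Toy model (illustrative parameters, not from the paper):
#   source:    X_t = a * X_{t-1} + W_t,      W_t ~ N(0, sigma_w2)
#   observers: Y_{k,t} = X_t + V_{k,t},      V_{k,t} ~ N(0, sigma_v2), k = 1..K
# The decoder below is the rate-UNconstrained baseline: a scalar Kalman filter
# acting on the average of the K observations (a sufficient statistic here).
rng = np.random.default_rng(0)
a, sigma_w2 = 0.9, 1.0      # source dynamics and innovation variance
K, sigma_v2 = 3, 0.5        # number of observers and per-channel AWGN variance
T = 10_000

p = sigma_w2 / (1 - a**2)   # stationary source variance, used as the prior
x = rng.normal(0.0, np.sqrt(p))
x_hat, sq_err = 0.0, 0.0
for t in range(T):
    x = a * x + rng.normal(0.0, np.sqrt(sigma_w2))          # source step
    y_bar = x + rng.normal(0.0, np.sqrt(sigma_v2 / K))      # averaged K-channel output
    x_pred, p_pred = a * x_hat, a**2 * p + sigma_w2         # Kalman predict
    gain = p_pred / (p_pred + sigma_v2 / K)                 # Kalman gain
    x_hat = x_pred + gain * (y_bar - x_pred)                # Kalman update
    p = (1.0 - gain) * p_pred
    sq_err += (x - x_hat) ** 2

print(f"empirical MSE {sq_err / T:.4f} vs. steady-state Kalman variance {p:.4f}")
```

Any rate-constrained scheme with isolated observers incurs at least this distortion; the paper quantifies the sum rate needed to approach a target distortion and bounds the rate loss relative to observers that can cooperate.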