Local List-Decoding and Testing of Random Linear Codes from High Error

Bibliographic Details
Published in: SIAM Journal on Computing, Vol. 42, no. 3, pp. 1302-1326
Main Authors: Kopparty, Swastik; Saraf, Shubhangi
Format: Journal Article
Language: English
Published: Philadelphia: Society for Industrial and Applied Mathematics, 2013
ISSN: 0097-5397, 1095-7111
Description
Summary: In this paper, we give efficient algorithms for list-decoding and testing random linear codes. Our main result is that random sparse linear codes are locally list-decodable and locally testable in the high-error regime with only a constant number of queries. More precisely, we show that for all constants $c > 0$ and $\gamma > 0$, every linear code $\mathcal C \subseteq \{0,1\}^N$ which is (1) sparse: $|\mathcal C| \leq N^c$, and (2) unbiased: each nonzero codeword in $\mathcal C$ has fractional weight $\in (\frac{1}{2} - N^{-\gamma}, \frac{1}{2} + N^{-\gamma})$, is locally testable and locally list-decodable from a $(\frac{1}{2} - \epsilon)$-fraction of worst-case errors using only $\mathrm{poly}(\frac{1}{\epsilon})$ queries to a received word. We also give subexponential time algorithms for list-decoding arbitrary unbiased (but not necessarily sparse) linear codes in the high-error regime. In particular, this yields the first subexponential time algorithm even for the problem of (unique) decoding random linear codes of inverse-polynomial rate from a fixed positive fraction of errors. Earlier, Kaufman and Sudan showed that sparse, unbiased codes can be locally (unique-)decoded and locally tested from a constant fraction of errors, where this constant fraction tends to 0 as the number of codewords grows. Our results strengthen their results, while also having simpler proofs. At the heart of our algorithms is a natural "self-correcting" operation defined on codes and received words. This self-correcting operation transforms a code $\mathcal C$ with a received word $w$ into a simpler code $\mathcal C'$ and a related received word $w'$ such that $w$ is close to $\mathcal C$ if and only if $w'$ is close to $\mathcal C'$. Starting with a sparse, unbiased code $\mathcal C$ and an arbitrary received word $w$, a constant number of applications of the self-correcting operation reduces us to the case of local list-decoding and testing for the Hadamard code, for which the well-known algorithms of Goldreich and Levin and of Blum, Luby, and Rubinfeld are available. This yields the constant-query local algorithms for the original code $\mathcal C$. Our algorithm for decoding unbiased linear codes in subexponential time proceeds similarly. Applying the self-correcting operation to an unbiased code $\mathcal C$ and an arbitrary received word a superconstant number of times, we are reduced to the problem of learning noisy parities, for which nontrivial subexponential time algorithms were recently given by Blum, Kalai, and Wasserman and by Feldman et al. Our result generalizes a result of Lyubashevsky, which gave a subexponential time algorithm for decoding random linear codes of inverse-polynomial rate from random errors.
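The base case the abstract reduces to, local testing and local correction of the Hadamard code via the algorithms of Blum, Luby, and Rubinfeld (BLR) and of Goldreich and Levin, can be illustrated concretely. The Python sketch below is not the paper's algorithm; the function names (blr_rejection_rate, hadamard_self_correct) and parameters (k, trials, the noise level) are illustrative choices of ours. It shows the BLR linearity test and a majority-vote self-correction of a Hadamard codeword, which only handles a small constant error rate; reaching the $(\frac{1}{2} - \epsilon)$-error list-decoding regime requires the Goldreich-Levin algorithm, which is not implemented here.

    import random

    def blr_rejection_rate(w, k, trials=400):
        # BLR linearity test: estimate the fraction of random pairs (x, y)
        # in F_2^k (bit vectors encoded as integers, + is XOR) with
        # w(x) + w(y) != w(x + y).  A rate close to 0 indicates w is close
        # to some linear function <m, .>, i.e. to a Hadamard codeword.
        fails = 0
        for _ in range(trials):
            x = random.randrange(2 ** k)
            y = random.randrange(2 ** k)
            if (w(x) ^ w(y)) != w(x ^ y):
                fails += 1
        return fails / trials

    def hadamard_self_correct(w, x, k, trials=200):
        # Recover the nearby Hadamard codeword's value at position x by a
        # majority vote over w(r) + w(x + r) for random r.  Each vote is
        # correct whenever both queried positions are uncorrupted, so this
        # works at small error rates (unique decoding), not at 1/2 - epsilon.
        ones = 0
        for _ in range(trials):
            r = random.randrange(2 ** k)
            ones += w(r) ^ w(x ^ r)
        return 1 if 2 * ones > trials else 0

    if __name__ == "__main__":
        random.seed(0)
        k = 12
        m = random.randrange(1, 2 ** k)      # hidden message defining the codeword

        def codeword(x):                     # Hadamard encoding: x -> <m, x> over F_2
            return bin(m & x).count("1") & 1

        corrupted = {random.randrange(2 ** k) for _ in range(2 ** k // 20)}  # ~5% noise

        def w(x):                            # received word = codeword plus sparse noise
            return codeword(x) ^ (1 if x in corrupted else 0)

        print("BLR rejection rate (small => close to linear):", blr_rejection_rate(w, k))
        x = random.randrange(2 ** k)
        print("self-corrected bit equals codeword bit:",
              hadamard_self_correct(w, x, k) == codeword(x))

The majority vote succeeds here because each of the two queries misses the corrupted positions with good probability at a 5% error rate; at error rates approaching 1/2 the vote carries no signal, which is exactly why the high-error regime needs the list-decoding machinery described in the abstract.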
DOI: 10.1137/100811945