Max-Linear Regression by Convex Programming


Detailed Bibliography
Published in: IEEE Transactions on Information Theory, Volume 70, Issue 3, pp. 1897-1912
Main Authors: Kim, Seonho; Bahmani, Sohail; Lee, Kiryung
Format: Journal Article
Language: English
Publication Details: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.03.2024
ISSN: 0018-9448, 1557-9654
Description
Summary: We consider the multivariate max-linear regression problem, where the model parameters $\beta_{1},\dotsc,\beta_{k}\in \mathbb{R}^{p}$ need to be estimated from $n$ independent samples of the (noisy) observations $y = \max_{1\leq j \leq k} \beta_{j}^{\mathsf{T}} x + \mathrm{noise}$. The max-linear model vastly generalizes the conventional linear model, and it can approximate any convex function to arbitrary accuracy when the number of linear models $k$ is large enough. However, the inherent nonlinearity of the max-linear model renders the estimation of the regression parameters computationally challenging. In particular, no estimator based on convex programming is known in the literature. We formulate and analyze a scalable convex program given by anchored regression (AR) as the estimator for the max-linear regression problem. Under the standard Gaussian observation setting, we present a non-asymptotic performance guarantee showing that the convex program recovers the parameters with high probability. When the $k$ linear components are equally likely to achieve the maximum, our result shows that the number of noise-free observations sufficient for exact recovery scales as $k^{4}p$ up to a logarithmic factor. This sample complexity coincides with that of alternating minimization (Ghosh et al., 2021). Moreover, the same sample complexity applies when the observations are corrupted with arbitrary deterministic noise.
We provide empirical results showing that our method performs as our theoretical result predicts and is competitive with the alternating minimization algorithm, particularly in the presence of multiplicative Bernoulli noise. Furthermore, we show empirically that a recursive application of AR can significantly improve the estimation accuracy.
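In the noise-free setting, the anchored-regression idea from the abstract can be sketched as a linear program: since $\max_j \beta_j^{\mathsf{T}} x_i \le y_i$ decomposes into $nk$ linear constraints $x_i^{\mathsf{T}} \beta_j \le y_i$, one may maximize the inner product with an anchor point over this polyhedron. The sketch below is a hypothetical simplification, not the paper's exact formulation; in particular, it assumes an oracle anchor (the true parameters) purely for illustration, whereas in practice the anchor would come from an initialization scheme.

```python
import numpy as np
from scipy.optimize import linprog


def anchored_max_linear(X, y, anchor):
    """Sketch of anchored regression for noise-free max-linear data.

    Solves:  maximize <anchor, B>  subject to  x_i^T beta_j <= y_i  for all i, j.
    This is a hypothetical simplified LP, not the paper's exact convex program.
    """
    n, p = X.shape
    k = anchor.shape[0]
    # One linear constraint per (sample i, component j): row i*k + j.
    A_ub = np.zeros((n * k, k * p))
    b_ub = np.repeat(y, k)
    for j in range(k):
        A_ub[j::k, j * p:(j + 1) * p] = X
    # linprog minimizes, so negate the anchor objective; variables are free.
    res = linprog(c=-anchor.ravel(), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (k * p), method="highs")
    return res.x.reshape(k, p), res


rng = np.random.default_rng(0)
p, k, n = 3, 2, 500
B_true = rng.normal(size=(k, p))
X = rng.normal(size=(n, p))
y = (X @ B_true.T).max(axis=1)  # noise-free max-linear observations

# Oracle anchor used only to keep the illustration self-contained.
B_hat, res = anchored_max_linear(X, y, anchor=B_true)
```

By construction the true parameters are feasible for this LP, so the returned estimate achieves at least the objective value of the ground truth; the paper's analysis concerns when such a convex program actually recovers the parameters exactly.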
DOI: 10.1109/TIT.2024.3350518