Efficient randomized tensor-based algorithms for function approximation and low-rank kernel interactions

Detailed bibliography
Published in: Advances in Computational Mathematics, Volume 48, Issue 5
Main authors: Saibaba, Arvind K.; Minster, Rachel; Kilmer, Misha E.
Format: Journal Article
Language: English
Published: New York: Springer US (Springer Nature B.V.), 01.10.2022
ISSN: 1019-7168, 1572-9044
Description
Summary: In this paper, we introduce a method for multivariate function approximation using function evaluations, Chebyshev polynomials, and tensor-based compression techniques via the Tucker format. We develop novel randomized techniques to accomplish the tensor compression, provide a detailed analysis of the computational costs, provide insight into the error of the resulting approximations, and discuss the benefits of the proposed approaches. We also apply the tensor-based function approximation to develop low-rank matrix approximations to kernel matrices that describe pairwise interactions between two sets of points; the resulting low-rank approximations are efficient to compute and store (the complexity is linear in the number of points). We present an adaptive version of the function and kernel approximation that determines an approximation satisfying a user-specified relative error over a set of random points. We extend our approach to the case where the kernel requires repeated evaluations for many values of (hyper)parameters that govern it. We give detailed numerical experiments on example problems involving multivariate function approximation, low-rank matrix approximations of kernel matrices with well-separated clusters of source and target points, and a global low-rank approximation of kernel matrices with an application to Gaussian processes. We observe speedups of up to 18× over standard matrix-based approaches.
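
The record itself contains no code, but to make the abstract concrete, the following is a minimal NumPy sketch of one randomized Tucker compression in the spirit the abstract describes: a randomized range finder applied to each mode unfolding, followed by projection onto the resulting bases. The function name `randomized_tucker`, the test function, the ranks, and the oversampling parameter are illustrative assumptions, not the paper's actual algorithm or analysis.

```python
import numpy as np

def randomized_tucker(X, ranks, oversample=10, seed=0):
    """Sketch of a randomized HOSVD-style Tucker compression:
    for each mode, approximate the range of the mode unfolding
    with a Gaussian sketch, then project X onto those bases."""
    rng = np.random.default_rng(seed)
    factors = []
    for mode, r in enumerate(ranks):
        # Mode unfolding: rows indexed by this mode, columns by the rest
        Xm = np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)
        # Randomized range finder: sketch with a Gaussian test matrix
        Omega = rng.standard_normal((Xm.shape[1], r + oversample))
        Y = Xm @ Omega
        Uhat, _, _ = np.linalg.svd(Y, full_matrices=False)
        factors.append(Uhat[:, :r])
    # Core tensor: contract each mode of X with the transposed factor
    core = X
    for mode, U in enumerate(factors):
        core = np.moveaxis(
            np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

# Toy usage: compress samples of a smooth trivariate function
g = np.linspace(-1.0, 1.0, 40)
X, Y, Z = np.meshgrid(g, g, g, indexing="ij")
T = 1.0 / (1.0 + X**2 + Y**2 + Z**2)
core, factors = randomized_tucker(T, ranks=(6, 6, 6))
# Reconstruct from the Tucker factors and report the relative error
R = core
for mode, U in enumerate(factors):
    R = np.moveaxis(np.tensordot(U, np.moveaxis(R, mode, 0), axes=1), 0, mode)
print("relative error:", np.linalg.norm(R - T) / np.linalg.norm(T))
```

Storing the small core and three thin factor matrices instead of the full tensor is what makes the downstream kernel approximations cheap to compute and store; the paper's adaptive and parametrized variants build on this basic compression step.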
DOI: 10.1007/s10444-022-09979-7