Power Series Kernels (manuscript submitted to Constructive Approximation)

Saved in:
Bibliographic Details
Title: Power Series Kernels
Authors: Barbara Zwicknagl
Contributors: The Pennsylvania State University CiteSeerX Archives
Source: http://www.mis.mpg.de/preprints/2006/preprint2006_155.pdf
Publication Year: 2006
Collection: CiteSeerX
Subject Terms: multivariate polynomial approximation, Bernstein theorem, dot product kernels, reproducing kernel Hilbert spaces, error bounds, convergence orders
Description: We introduce a class of analytic positive definite multivariate kernels which includes infinite dot product kernels as sometimes used in machine learning, certain new nonlinearly factorizable kernels, and a kernel which is closely related to the Gaussian. Each such kernel reproduces in a certain ‘native’ Hilbert space of multivariate analytic functions. If functions from this space are interpolated in scattered locations by translates of the kernel, we prove spectral convergence rates of the interpolants and all derivatives. By truncation of the power series of the kernel-based interpolants, we constructively generalize the classical Bernstein theorem concerning polynomial approximation of analytic functions to the multivariate case. An application to machine learning algorithms is presented.
Document Type: text
File Description: application/pdf
Language: English
Relation: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.86.5528; http://www.mis.mpg.de/preprints/2006/preprint2006_155.pdf
Availability: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.86.5528; http://www.mis.mpg.de/preprints/2006/preprint2006_155.pdf
Rights: Metadata may be used without restrictions as long as the oai identifier remains attached to it.
Accession Number: edsbas.C6C1A6E3
Database: BASE
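The abstract describes interpolation of scattered data by translates of a dot product kernel. The sketch below illustrates that scheme in a minimal form; it is not code from the paper, and the specific kernel K(x, y) = exp(<x, y>) (a power series kernel with coefficients 1/n!), the data sites, and the target function are all assumptions chosen for demonstration.

```python
import numpy as np

# Illustrative sketch only: scattered-data interpolation by translates of a
# dot product kernel. The kernel K(x, y) = exp(<x, y>) = sum_n <x, y>^n / n!
# is an assumed example of a power series kernel, not taken from the paper.

rng = np.random.default_rng(0)

def gram(X, Y):
    """Gram matrix of the dot product kernel K(x, y) = exp(<x, y>)."""
    return np.exp(X @ Y.T)

# Scattered interpolation sites in [-1, 1]^2 and a smooth target function
X = rng.uniform(-1.0, 1.0, size=(10, 2))
f = lambda P: np.sin(P[:, 0]) * np.exp(P[:, 1])

# Interpolant s(x) = sum_j c_j K(x, x_j) with s(x_i) = f(x_i):
# solve the positive definite kernel system for the coefficients c
c = np.linalg.solve(gram(X, X), f(X))

def interpolant(P):
    """Evaluate the kernel interpolant at new points P."""
    return gram(P, X) @ c

# The interpolant reproduces the data at the sites (residual near machine precision)
print(np.max(np.abs(interpolant(X) - f(X))))
```

Truncating the power series of `gram` at a finite degree, as the abstract describes, would turn this interpolant into a multivariate polynomial approximant.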