Generalization bounds for sparse random feature expansions

Bibliographic Details
Published in: Applied and Computational Harmonic Analysis, Volume 62, pp. 310-330
Main Authors: Hashemi, Abolfazl; Schaeffer, Hayden; Shi, Robert; Topcu, Ufuk; Tran, Giang; Ward, Rachel
Format: Journal Article
Language: English
Published: Elsevier Inc., 01.01.2023
ISSN: 1063-5203, 1096-603X
Description
Summary: Random feature methods have been successful in various machine learning tasks, are easy to compute, and come with theoretical accuracy bounds. They serve as an alternative approach to standard neural networks since they can represent similar function spaces without a costly training phase. However, for accuracy, random feature methods require more measurements than trainable parameters, limiting their use for data-scarce applications. We introduce the sparse random feature expansion to obtain parsimonious random feature models. We leverage ideas from compressive sensing to generate random feature expansions with theoretical guarantees even in the data-scarce setting. We provide generalization bounds for functions in a certain class depending on the number of samples and the distribution of features. By introducing sparse features, i.e., features with random sparse weights, we provide improved bounds for low-order functions. We show that our method outperforms shallow networks in several scientific machine learning tasks.
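
To make the idea in the summary concrete, the following is a minimal Python sketch, not the paper's implementation: cosine random features whose weight vectors are supported on only a few randomly chosen coordinates, with a LASSO fit standing in for the compressive-sensing (basis pursuit) coefficient recovery. The feature count, sparsity order, regularization strength, and the toy target function are illustrative assumptions, not the settings analyzed by the authors.

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

def sparse_random_features(X, n_features=800, order=2, rng=rng):
    # Each random weight vector is nonzero on only `order` randomly chosen
    # coordinates (the "sparse features" of the summary); values are Gaussian.
    n_samples, d = X.shape
    W = np.zeros((n_features, d))
    for k in range(n_features):
        support = rng.choice(d, size=order, replace=False)
        W[k, support] = rng.standard_normal(order)
    b = rng.uniform(0.0, 2.0 * np.pi, n_features)   # random phases
    return np.cos(X @ W.T + b), W, b

# Toy data-scarce problem: far fewer samples than random features,
# target depends on only a few coordinates (a low-order function).
d, m = 10, 60
X = rng.uniform(-1.0, 1.0, size=(m, d))
y = np.sin(2.0 * X[:, 0]) + X[:, 1] * X[:, 2]

A, W, b = sparse_random_features(X)

# Sparse coefficient recovery; LASSO is used here as a stand-in solver.
model = Lasso(alpha=1e-3, max_iter=50_000).fit(A, y)

X_test = rng.uniform(-1.0, 1.0, size=(200, d))
y_test = np.sin(2.0 * X_test[:, 0]) + X_test[:, 1] * X_test[:, 2]
y_pred = np.cos(X_test @ W.T + b) @ model.coef_ + model.intercept_
print("nonzero coefficients:", np.count_nonzero(model.coef_))
print("test MSE:", np.mean((y_pred - y_test) ** 2))

The point of the sketch is the structure of the model: random weights with sparse supports generate the dictionary, and a sparse regression over that dictionary produces a parsimonious expansion from few samples.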
DOI: 10.1016/j.acha.2022.08.003