Comparison of ℓ1-Norm SVR and Sparse Coding Algorithms for Linear Regression

Detailed Bibliography
Published in: IEEE Transactions on Neural Networks and Learning Systems, Volume 26, Issue 8, pp. 1828-1833
Main authors: Qingtian Zhang, Xiaolin Hu, Bo Zhang
Format: Journal Article
Language: English
Published: IEEE, 01.08.2015
ISSN: 2162-237X, 2162-2388
Description
Summary: Support vector regression (SVR) is a popular function estimation technique based on Vapnik's concept of the support vector machine. Among its many variants, the ℓ1-norm SVR is known to select useful features well when the features are redundant. Sparse coding (SC) is a technique widely used in many areas, and a number of efficient SC algorithms are available. Both ℓ1-norm SVR and SC can be used for linear regression. In this brief, the close connection between ℓ1-norm SVR and SC is revealed, and some typical algorithms of each are compared for linear regression. The results show that the SC algorithms are more efficient than the Newton linear programming algorithm, an efficient ℓ1-norm SVR algorithm. The algorithms are then used to design radial basis function (RBF) neural networks. Experiments on several benchmark data sets demonstrate the high efficiency of the SC algorithms. In particular, one of the SC algorithms, orthogonal matching pursuit, is two orders of magnitude faster than a well-known RBF network design algorithm, the orthogonal least squares algorithm.
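The connection the abstract refers to can be illustrated with standard textbook formulations (the notation here is an assumption, not necessarily the paper's). For a linear model, the ℓ1-norm SVR solves

\min_{w} \; \|w\|_1 + C \sum_{i=1}^{N} \left| y_i - w^\top x_i \right|_{\varepsilon},

where $|\cdot|_{\varepsilon}$ is the ε-insensitive loss, while sparse coding in its LASSO / basis-pursuit-denoising form solves

\min_{w} \; \tfrac{1}{2} \|y - D w\|_2^2 + \lambda \|w\|_1,

with D the dictionary. Both are ℓ1-penalized linear regression problems, which is what allows SC solvers to be applied in place of dedicated SVR algorithms.

One of the compared SC algorithms is orthogonal matching pursuit (OMP). The following is a minimal sketch, not the paper's implementation, of OMP as a sparse linear regression solver using scikit-learn; the synthetic dictionary D, target y, and the n_nonzero_coefs setting are illustrative assumptions.

import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n_samples, n_features, n_active = 100, 50, 5

# Synthetic dictionary (design matrix) and a sparse ground-truth weight vector.
D = rng.standard_normal((n_samples, n_features))
w_true = np.zeros(n_features)
support = rng.choice(n_features, n_active, replace=False)
w_true[support] = rng.standard_normal(n_active)
y = D @ w_true + 0.01 * rng.standard_normal(n_samples)  # noisy observations

# OMP greedily selects at most n_active dictionary atoms, re-solving a
# least-squares problem on the selected support at each step.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_active)
omp.fit(D, y)
print("recovered support:", np.flatnonzero(omp.coef_))

In the RBF network design setting the abstract describes, the columns of D would be candidate RBF kernel responses, so selecting atoms corresponds to selecting hidden units, the same role played by the orthogonal least squares algorithm against which OMP is compared.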
DOI: 10.1109/TNNLS.2014.2377245