Feature selection in machine learning: an exact penalty approach using a Difference of Convex function Algorithm

Detailed bibliography
Published in: Machine Learning, Vol. 101, No. 1-3, pp. 163-186
Main authors: Le Thi, Hoai An; Le, Hoai Minh; Pham Dinh, Tao
Format: Journal Article
Language: English
Published: New York: Springer US, 01.10.2015
ISSN:0885-6125, 1573-0565
Description
Summary: We develop an exact penalty approach for feature selection in machine learning via the zero-norm (ℓ0) regularization problem. Using a new result on exact penalty techniques, we equivalently reformulate the original problem as a Difference of Convex (DC) functions program. This approach permits us to treat all the existing convex and nonconvex approximations of the zero-norm in a unified view within the DC programming and DCA framework. An efficient DCA scheme is investigated for the resulting DC program. The algorithm is implemented for feature selection in SVM; it requires solving one linear program at each iteration and enjoys interesting convergence properties. We perform an empirical comparison with several nonconvex approximation approaches and show, on datasets from the UCI repository and the NIPS 2003 feature selection challenge, that the proposed algorithm is efficient in both feature selection and classification.
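The DCA scheme summarized above (one linear program per iteration) can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes the exponential concave approximation η(t) = 1 - exp(-αt) of the zero-norm, so each DCA step reduces to a reweighted-ℓ1 SVM linear program; the function name `dca_l0_svm` and all parameter values are hypothetical.

```python
# Sketch of a DCA-style scheme for l0-regularized linear SVM feature selection.
# Assumption (not from the paper's text): the concave surrogate eta(t) = 1 - exp(-alpha*t),
# whose DCA linearization yields per-feature weights c_j = alpha * exp(-alpha * |w_j|).
import numpy as np
from scipy.optimize import linprog

def dca_l0_svm(X, y, lam=0.5, alpha=5.0, iters=10):
    """Each DCA iteration solves one LP:
         min  sum(xi) + lam * sum(c_j * v_j)
         s.t. y_i (x_i . w + b) >= 1 - xi_i,   -v <= w <= v,   xi >= 0, v >= 0,
       where v_j bounds |w_j| and c_j reweights the l1 term."""
    m, n = X.shape
    w = np.zeros(n)
    for _ in range(iters):
        c_w = alpha * np.exp(-alpha * np.abs(w))   # DCA linearization weights
        # variable order: [w (n), b (1), v (n), xi (m)]
        cost = np.concatenate([np.zeros(n + 1), lam * c_w, np.ones(m)])
        # margin constraints: -y_i (x_i . w + b) - xi_i <= -1
        A1 = np.hstack([-y[:, None] * X, -y[:, None], np.zeros((m, n)), -np.eye(m)])
        # |w_j| <= v_j encoded as  w - v <= 0  and  -w - v <= 0
        A2 = np.hstack([np.eye(n), np.zeros((n, 1)), -np.eye(n), np.zeros((n, m))])
        A3 = np.hstack([-np.eye(n), np.zeros((n, 1)), -np.eye(n), np.zeros((n, m))])
        A = np.vstack([A1, A2, A3])
        rhs = np.concatenate([-np.ones(m), np.zeros(2 * n)])
        bounds = ([(None, None)] * (n + 1)    # w and b are free
                  + [(0, None)] * (n + m))    # v and xi are nonnegative
        res = linprog(cost, A_ub=A, b_ub=rhs, bounds=bounds, method="highs")
        w, b = res.x[:n], res.x[n]
    return w, b

# toy data: only the first feature is informative
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))
y = np.sign(X[:, 0] + 0.1 * rng.normal(size=40))
w, b = dca_l0_svm(X, y)
```

The reweighting step is what distinguishes this from a plain ℓ1-SVM: coordinates that are already large get a small penalty weight, while near-zero coordinates are pushed to exact zero, mimicking the zero-norm.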
DOI:10.1007/s10994-014-5455-y