Feature selection in machine learning: an exact penalty approach using a Difference of Convex function Algorithm

Bibliographic details
Published in: Machine Learning, Vol. 101, No. 1-3, pp. 163-186
Main authors: Le Thi, Hoai An; Le, Hoai Minh; Pham Dinh, Tao
Format: Journal Article
Language: English
Published: New York: Springer US, 01.10.2015 (Springer Nature B.V.; Springer Verlag)
ISSN: 0885-6125, 1573-0565
Online access: Full text
Description
Abstract: We develop an exact penalty approach for feature selection in machine learning via the zero-norm (ℓ0) regularization problem. Using a new result on exact penalty techniques, we reformulate the original problem equivalently as a Difference of Convex (DC) functions program. This approach permits us to consider all the existing convex and nonconvex approximation approaches for treating the zero-norm in a unified view within the DC programming and DCA framework. An efficient DCA scheme is investigated for the resulting DC program. The algorithm is implemented for feature selection in SVMs; it requires solving one linear program at each iteration and enjoys interesting convergence properties. We perform an empirical comparison with several nonconvex approximation approaches and show, using several datasets from the UCI database and the NIPS 2003 Feature Selection Challenge, that the proposed algorithm is efficient in both feature selection and classification.
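
As a rough illustration of the per-iteration linear program mentioned in the abstract, the following Python sketch shows a DCA-style iteratively reweighted L1 linear SVM. It assumes the exponential concave surrogate 1 - exp(-alpha*|w_i|) for the zero-norm and hypothetical parameters (lam, alpha, max_iter); it is not the authors' exact-penalty formulation, only an indication of how each DCA step can reduce to solving one LP.

import numpy as np
from scipy.optimize import linprog

def dca_l0_svm(X, y, lam=0.1, alpha=5.0, max_iter=20, tol=1e-6):
    """DCA-style sketch: iteratively reweighted L1 linear SVM.

    The zero-norm is approximated by sum_i (1 - exp(-alpha*|w_i|));
    each iteration linearizes the concave part and solves one LP
    (hinge-loss SVM with a weighted L1 penalty).  Hypothetical
    parameter choices, not the authors' exact formulation.
    X: (n, d) array, y: (n,) array of +/-1 labels.
    """
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(max_iter):
        # DCA step: per-feature weights from the gradient of the concave surrogate
        c_w = alpha * np.exp(-alpha * np.abs(w))
        # LP variables: [w (d), v (d) for |w|, b (1), xi (n)]
        n_var = 2 * d + 1 + n
        cost = np.concatenate([np.zeros(d), lam * c_w, [0.0], np.ones(n)])
        # |w_i| <= v_i  ->  w - v <= 0  and  -w - v <= 0
        A_abs = np.zeros((2 * d, n_var))
        A_abs[:d, :d] = np.eye(d)
        A_abs[:d, d:2 * d] = -np.eye(d)
        A_abs[d:, :d] = -np.eye(d)
        A_abs[d:, d:2 * d] = -np.eye(d)
        # margin constraints: y_j (x_j . w + b) >= 1 - xi_j
        A_m = np.zeros((n, n_var))
        A_m[:, :d] = -y[:, None] * X
        A_m[:, 2 * d] = -y
        A_m[:, 2 * d + 1:] = -np.eye(n)
        A_ub = np.vstack([A_abs, A_m])
        b_ub = np.concatenate([np.zeros(2 * d), -np.ones(n)])
        bounds = ([(None, None)] * d + [(0, None)] * d
                  + [(None, None)] + [(0, None)] * n)
        res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
        if not res.success:
            raise RuntimeError("LP subproblem failed: " + res.message)
        w_new, b = res.x[:d], res.x[2 * d]
        if np.max(np.abs(w_new - w)) < tol:
            w = w_new
            break
        w = w_new
    return w, b

Usage would be along the lines of w, b = dca_l0_svm(X, y) on a labeled dataset with y in {-1, +1}; features whose |w_i| falls below a small threshold can then be discarded, which is the sparsity effect the reweighted penalty is meant to produce.
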
DOI: 10.1007/s10994-014-5455-y