Sparse and robust support vector machine with capped squared loss for large-scale pattern classification

Published in: Pattern Recognition, Volume 153; p. 110544
Main authors: Wang, Huajun; Zhang, Hongwei; Li, Wenqian
Medium: Journal Article
Language: English
Published: Elsevier Ltd, 01.09.2024
ISSN:0031-3203, 1873-5142
Description
Summary: The support vector machine (SVM), widely regarded as one of the most efficient tools for classification, has received broad attention across many fields. However, its performance suffers on large-scale pattern classification tasks because of high memory requirements and long running times. To address this challenge, we construct a novel sparse and robust SVM based on our newly proposed capped squared loss (termed Lcsl-SVM). To solve Lcsl-SVM, we first establish its optimality theory via our defined proximal stationary point, which allows us to efficiently characterize the Lcsl support vectors of Lcsl-SVM. We then demonstrate that the Lcsl support vectors comprise only a small fraction of the training data, an observation that leads us to introduce the concept of the working set. Furthermore, we design a novel subspace fast algorithm with a working set (termed Lcsl-ADMM) for solving Lcsl-SVM, and prove that Lcsl-ADMM enjoys both global convergence and relatively low computational complexity. Finally, numerical experiments show that Lcsl-ADMM achieves the best classification accuracy, the shortest running time, and the smallest number of support vectors when solving large-scale pattern classification problems.
•We establish a novel SVM model called the capped squared loss SVM.
•We prove the optimality theory for the capped squared loss SVM.
•We propose a novel subspace fast algorithm with a working set to solve the capped squared loss SVM.
•We demonstrate that our algorithm can efficiently solve the capped squared loss SVM.
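The record does not give the exact form of the paper's capped squared loss, but the robustness idea it describes — bounding the penalty so that outliers and mislabeled points cannot dominate training — can be illustrated with a common capped variant of the squared hinge loss, min((max(0, 1 − m))², a), where m = y·f(x) is the classification margin and a > 0 is the cap. This is a minimal sketch under that assumed form, not the authors' exact definition; the function name and the cap parameter are illustrative.

```python
import numpy as np

def capped_squared_loss(margins, cap=1.0):
    """Capped squared hinge loss (assumed illustrative form):
    min((max(0, 1 - m))^2, cap) for each margin m = y * f(x).

    Correctly classified points with margin >= 1 incur zero loss;
    badly misclassified points (very negative margin) are capped at
    `cap`, which is what bounds the influence of outliers.
    """
    margins = np.asarray(margins, dtype=float)
    squared_hinge = np.maximum(0.0, 1.0 - margins) ** 2
    return np.minimum(squared_hinge, cap)

# Example: a comfortable margin, a small violation, and a gross outlier.
losses = capped_squared_loss([2.0, 0.5, -10.0], cap=1.0)
print(losses)  # → [0.   0.25 1.  ]
```

Note how the outlier with margin −10 would contribute 121 under the plain squared hinge loss, but contributes only the cap value here; this bounded influence is also what makes points with saturated loss drop out of the support-vector set, consistent with the abstract's claim that the Lcsl support vectors form a small fraction of the training data.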
DOI:10.1016/j.patcog.2024.110544