Sparse and robust alternating direction method of multipliers for large-scale classification learning

Bibliographic Details
Published in: Neurocomputing (Amsterdam) Vol. 652; p. 130893
Main Authors: Wang, Huajun, Li, Wenqian, Shao, Yuanhai, Zhang, Hongwei
Format: Journal Article
Language: English
Published: Elsevier B.V., 01.11.2025
ISSN: 0925-2312
Description
Summary: The support vector machine (SVM) is a highly effective method for classification learning. However, for large-scale classification problems, its high computational complexity can pose a significant obstacle. To address this, we establish a new trimmed squared loss SVM model, called TSVM, designed to achieve both sparsity and robustness simultaneously. A novel optimality theory is developed for the nonsmooth and nonconvex TSVM. Building on this theory, a fast alternating direction method of multipliers with low computational complexity and a working-set strategy is proposed to solve TSVM. Numerical tests show the effectiveness of the new method in computational speed, number of support vectors, and classification accuracy, outperforming eight alternative top solvers. As an illustration, on a real dataset with more than 10⁷ instances, our algorithm ran 34 times faster than seven other algorithms, while achieving a 6.5% improvement in accuracy and a 25-fold reduction in support vector rate.
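The abstract does not give the exact TSVM formulation, but a trimmed squared loss of this kind is commonly a squared hinge loss capped at a threshold. Below is a minimal Python sketch under that assumption; the function name `trimmed_squared_hinge` and the trimming threshold `delta` are hypothetical, illustrating only how capping bounds outlier influence (robustness) while well-classified points incur zero loss (sparsity in the support vectors). It is not the authors' implementation.

```python
import numpy as np

def trimmed_squared_hinge(margins, delta=4.0):
    """Illustrative capped (trimmed) squared hinge loss.

    margins: y_i * (w . x_i + b) for each sample.
    delta:   hypothetical trimming threshold; losses are capped at
             delta so gross outliers have bounded influence.
    """
    # Squared hinge is zero for margins >= 1, so well-classified
    # points drop out entirely -- the source of sparsity.
    sq_hinge = np.maximum(0.0, 1.0 - margins) ** 2
    # Trimming caps the loss, giving robustness to mislabeled or
    # outlying samples.
    return np.minimum(sq_hinge, delta)

# Toy check: a well-classified point, a marginal point, an outlier.
margins = np.array([2.0, 0.5, -10.0])
print(trimmed_squared_hinge(margins))  # [0.   0.25 4.  ]
```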
DOI: 10.1016/j.neucom.2025.130893