A new accelerated proximal gradient technique for regularized multitask learning framework


Bibliographic Details
Published in: Pattern Recognition Letters, Vol. 95, pp. 98–103
Main Authors: Verma, Mridula; Shukla, K.K.
Format: Journal Article
Language: English
Published: Amsterdam: Elsevier B.V., 01.08.2017
ISSN: 0167-8655, 1872-7344
Description
Summary:
•A new accelerated gradient method for the regularized multitask learning framework.
•This is the first time the combination of the extra-gradient and the inertial term has been analyzed for the multitask learning problem.
•Convergence and stability of the algorithm have been proven under specified conditions.
•Experiments are conducted on three real multitask regression and two multitask classification datasets.
•The algorithm outperforms earlier methods in terms of empirical convergence rate, standard accuracy measures and computational time.

Multitask learning can be defined as the joint learning of related tasks using shared representations, such that each task helps the other tasks perform better. One of the various multitask learning frameworks is the regularized convex minimization problem, for which many optimization techniques are available in the literature. In this paper, we consider solving the non-smooth convex minimization problem with sparsity-inducing regularizers for the multitask learning framework, which can be solved efficiently using proximal algorithms. Because traditional proximal gradient methods converge slowly, a recent trend is to introduce acceleration into these methods to increase the speed of convergence. In this paper, we present a new accelerated gradient method for the multitask regression framework that not only outperforms its non-accelerated counterpart and the traditional accelerated proximal gradient method but also improves prediction accuracy. We also prove the convergence and stability of the algorithm under a few specific conditions. To demonstrate the applicability of our method, we performed experiments on several real multitask learning benchmark datasets. Empirical results show that our method outperforms previous methods in terms of convergence, accuracy and computational time.
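The abstract names the method's two key ingredients, an inertial (momentum) term and an extra-gradient step layered on a proximal gradient iteration, but the record does not reproduce the update rule itself. Below is a minimal, hypothetical sketch of how these pieces are typically combined, using an l1-regularized multitask least-squares objective as a stand-in for the paper's sparsity-inducing regularizers; the function names, the inertial weight `beta`, and the fixed step size are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def soft_threshold(W, tau):
    """Proximal operator of the l1 norm: elementwise soft-thresholding."""
    return np.sign(W) * np.maximum(np.abs(W) - tau, 0.0)

def inertial_extragradient_prox(X, Y, lam, step, beta=0.9, n_iter=500):
    """Hypothetical sketch: inertial proximal gradient with an extra-gradient
    (prediction-correction) step for

        min_W  0.5 * ||X W - Y||_F^2 + lam * ||W||_1,

    where X is (n, d) shared features, Y is (n, T) targets for T tasks and
    W is the (d, T) task-weight matrix. `beta` (inertial weight) and the
    fixed `step` size are assumed parameters, not the paper's schedule.
    """
    d, T = X.shape[1], Y.shape[1]
    W = np.zeros((d, T))
    W_prev = W.copy()
    for _ in range(n_iter):
        # Inertial extrapolation from the two most recent iterates.
        V = W + beta * (W - W_prev)
        # Prediction: one proximal gradient step from the extrapolated point.
        W_pred = soft_threshold(V - step * (X.T @ (X @ V - Y)), step * lam)
        # Correction (extra-gradient): re-evaluate the gradient at the
        # predicted point and take the proximal step from V again.
        W_new = soft_threshold(V - step * (X.T @ (X @ W_pred - Y)), step * lam)
        W_prev, W = W, W_new
    return W
```

In this pattern the prediction step takes a proximal gradient step from the extrapolated point V, and the correction step repeats it using the gradient evaluated at the prediction; the paper's actual parameter schedule, regularizer, and convergence conditions may differ.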
DOI: 10.1016/j.patrec.2017.06.013