Multikernel Passive Stochastic Gradient Algorithms and Transfer Learning

Detailed Bibliography
Published in: IEEE Transactions on Automatic Control, Volume 67, Issue 4, pp. 1792-1805
Main Authors: Krishnamurthy, Vikram; Yin, George
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.04.2022
ISSN: 0018-9286, 1558-2523
Description
Summary: This article develops a novel passive stochastic gradient algorithm. In passive stochastic approximation, the stochastic gradient algorithm does not have control over the location where noisy gradients of the cost function are evaluated. Classical passive stochastic gradient algorithms use a kernel that approximates a Dirac delta to weight the gradients based on how far they are evaluated from the desired point. In this article, we construct a multikernel passive stochastic gradient algorithm. The algorithm performs substantially better in high-dimensional problems and incorporates variance reduction. We analyze the weak convergence of the multikernel algorithm and its rate of convergence. In numerical examples, we study the multikernel version of the passive least mean squares algorithm for transfer learning to compare the performance with the classical passive version.
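
The sketch below illustrates only the classical passive scheme described in the summary: noisy gradients arrive at points the algorithm does not choose, and a kernel approximating a Dirac delta weights each gradient by how far its evaluation point lies from the current iterate. The quadratic cost, Gaussian kernel, constant step size, and fixed bandwidth are illustrative assumptions; the paper's multikernel construction and variance reduction are not reproduced here.

import numpy as np

rng = np.random.default_rng(0)

d = 2                         # low-dimensional illustration (assumption)
theta_star = np.ones(d)       # minimizer of the synthetic cost (assumption)

def noisy_gradient(x):
    # Gradient of c(theta) = 0.5 * ||theta - theta_star||^2, observed with noise.
    return (x - theta_star) + 0.1 * rng.standard_normal(d)

def gaussian_kernel(u):
    # Kernel that concentrates toward a Dirac delta as the bandwidth shrinks.
    return np.exp(-0.5 * np.dot(u, u)) / (2.0 * np.pi) ** (d / 2)

theta = np.zeros(d)           # initial estimate
step = 0.5                    # constant step size (illustrative)
bandwidth = 1.0               # kernel bandwidth (illustrative)

for k in range(20000):
    # Passive setting: an exogenous process supplies the evaluation point x_k.
    x_k = theta_star + rng.standard_normal(d)

    # Weight the observed gradient by how close x_k is to the current iterate.
    weight = gaussian_kernel((x_k - theta) / bandwidth) / bandwidth ** d
    theta = theta - step * weight * noisy_gradient(x_k)

print("estimate:", np.round(theta, 2), "target:", theta_star)

In higher dimensions most evaluation points receive negligible kernel weight under a single shrinking kernel, which is the regime in which the summary reports the multikernel variant performing substantially better.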
DOI: 10.1109/TAC.2021.3079280