A deterministic gradient-based approach to avoid saddle points

Detailed bibliography
Published in: European Journal of Applied Mathematics, Vol. 34, No. 4, pp. 738–757
Main authors: Kreusser, L. M., Osher, S. J., Wang, B.
Format: Journal Article
Language: English
Published: Cambridge University Press, United States, 01.08.2023
ISSN: 0956-7925, 1469-4425
Description
Summary: Loss functions with a large number of saddle points are one of the major obstacles for training modern machine learning (ML) models efficiently. First-order methods such as gradient descent (GD) are usually the methods of choice for training ML models. However, these methods converge to saddle points for certain choices of initial guesses. In this paper, we propose a modification of the recently proposed Laplacian smoothing gradient descent (LSGD) [Osher et al., arXiv:1806.06317], called modified LSGD (mLSGD), and demonstrate its potential to avoid saddle points without sacrificing the convergence rate. Our analysis is based on the attraction region, formed by all starting points for which the considered numerical scheme converges to a saddle point. We investigate the attraction region's dimension both analytically and numerically. For a canonical class of quadratic functions, we show that the dimension of the attraction region for mLSGD is $\lfloor (n-1)/2\rfloor$, and hence it is significantly smaller than that of GD, whose dimension is $n-1$.
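The summary builds on the LSGD scheme of Osher et al. (arXiv:1806.06317), in which the raw gradient is replaced by a Laplacian-smoothed gradient $(I - \sigma L)^{-1}\nabla f$, with $L$ the one-dimensional discrete Laplacian under periodic boundary conditions. The Python sketch below illustrates that baseline LSGD step only; it is not the paper's mLSGD variant (whose exact modification is not stated in this record), and the step size, smoothing parameter and toy quadratic are hypothetical choices for illustration.

import numpy as np

def lsgd_step(x, grad, sigma=1.0, lr=0.1):
    """One Laplacian smoothing gradient descent step (baseline LSGD, not mLSGD).

    The gradient g is replaced by (I - sigma * L)^{-1} g, where L is the 1D
    discrete Laplacian with periodic boundary conditions. Since I - sigma * L
    is circulant, the solve is done in O(n log n) with the FFT.
    """
    n = x.size
    g = grad(x)
    # First column of the circulant operator A_sigma = I - sigma * L,
    # where L has stencil [1, -2, 1]: entries 1 + 2*sigma, -sigma, 0, ..., 0, -sigma.
    v = np.zeros(n)
    v[0] = 1.0 + 2.0 * sigma
    v[1] = -sigma
    v[-1] = -sigma
    smoothed = np.real(np.fft.ifft(np.fft.fft(g) / np.fft.fft(v)))
    return x - lr * smoothed

# Hypothetical example: a quadratic f(x) = 0.5 * sum(d_i * x_i^2) with
# mixed-sign coefficients, so the origin is a saddle point.
D = np.array([1.0, 1.0, -1.0, 2.0])
grad = lambda x: D * x
x = np.array([0.5, -0.3, 0.2, 0.1])
for _ in range(100):
    x = lsgd_step(x, grad, sigma=1.0, lr=0.1)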
Bibliography: USDOE Office of Science (SC)
SC0002722; SC0021142
DOI: 10.1017/S0956792522000316