A deterministic gradient-based approach to avoid saddle points

Bibliographic Details
Published in: European Journal of Applied Mathematics, Vol. 34, No. 4, pp. 738–757
Main Authors: Kreusser, L. M., Osher, S. J., Wang, B.
Format: Journal Article
Language: English
Published: Cambridge University Press, United States, 01.08.2023
ISSN: 0956-7925, 1469-4425
Description
Summary: Loss functions with a large number of saddle points are one of the major obstacles to training modern machine learning (ML) models efficiently. First-order methods such as gradient descent (GD) are usually the methods of choice for training ML models. However, these methods converge to saddle points for certain choices of initial guesses. In this paper, we propose a modification of the recently proposed Laplacian smoothing gradient descent (LSGD) [Osher et al., arXiv:1806.06317], called modified LSGD (mLSGD), and demonstrate its potential to avoid saddle points without sacrificing the convergence rate. Our analysis is based on the attraction region, formed by all starting points for which the considered numerical scheme converges to a saddle point. We investigate the attraction region’s dimension both analytically and numerically. For a canonical class of quadratic functions, we show that the dimension of the attraction region for mLSGD is $\lfloor (n-1)/2\rfloor$, and hence significantly smaller than that of GD, whose dimension is $n-1$.
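The LSGD scheme cited above (Osher et al., arXiv:1806.06317) replaces the gradient in each GD step by a smoothed gradient $A_\sigma^{-1}\nabla f(w)$, where $A_\sigma = I - \sigma L$ and $L$ is the one-dimensional discrete Laplacian with periodic boundary conditions; since $A_\sigma$ is circulant, the solve can be carried out with an FFT. The sketch below illustrates the saddle-avoidance mechanism on a quadratic with a strict saddle. It implements a plain LSGD-style step, not the mLSGD variant analysed in the paper (whose exact modification is not given in this record); the test function, step size and smoothing parameter are illustrative choices.

```python
import numpy as np

def smoothed_grad(g, sigma=1.0):
    """Apply A_sigma^{-1} = (I - sigma*L)^{-1} to a gradient g, with L the
    1D discrete Laplacian under periodic boundary conditions. A_sigma is
    circulant, so the FFT diagonalises it; its eigenvalues are
    1 + 2*sigma*(1 - cos(2*pi*k/n)), all positive."""
    n = g.size
    eig = 1.0 + 2.0 * sigma * (1.0 - np.cos(2.0 * np.pi * np.arange(n) / n))
    return np.fft.ifft(np.fft.fft(g) / eig).real

# Illustrative quadratic f(w) = (1/2) w^T D w with D = diag(1, 1, -1):
# the origin is a strict saddle because D has a negative eigenvalue.
D = np.array([1.0, 1.0, -1.0])
grad_f = lambda w: D * w

eta = 0.1                        # illustrative step size
w0 = np.array([1.0, 1.0, 0.0])   # lies in GD's attraction region:
                                 # zero along the unstable direction
w_gd, w_ls = w0.copy(), w0.copy()
for _ in range(300):
    w_gd = w_gd - eta * grad_f(w_gd)                 # plain GD
    w_ls = w_ls - eta * smoothed_grad(grad_f(w_ls))  # LSGD-style step

print("GD:  ", w_gd)   # contracts to the saddle at the origin
print("LSGD:", w_ls)   # smoothing couples coordinates, feeding the
                       # unstable direction, so the iterate escapes
```

On this example GD never leaves the span of the two stable eigendirections, so it converges to the saddle, whereas the circulant smoothing matrix has nonzero off-diagonal entries that leak gradient mass into the unstable direction after a single step.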
Bibliography: USDOE Office of Science (SC); grants SC0002722, SC0021142
DOI: 10.1017/S0956792522000316