Loss landscapes and optimization in over-parameterized non-linear systems and neural networks

Detailed bibliography
Published in: Applied and Computational Harmonic Analysis, Volume 59, pp. 85-116
Main authors: Liu, Chaoyue; Zhu, Libin; Belkin, Mikhail
Medium: Journal Article
Language: English
Published: Elsevier Inc., 01.07.2022
ISSN: 1063-5203
Description
Summary: The success of deep learning is due, to a large extent, to the remarkable effectiveness of gradient-based optimization methods applied to large neural networks. The purpose of this work is to propose a modern view and a general mathematical framework for loss landscapes and efficient optimization in over-parameterized machine learning models and systems of non-linear equations, a setting that includes over-parameterized deep neural networks. Our starting observation is that optimization landscapes corresponding to such systems are generally not convex, even locally around a global minimum, a condition we call essential non-convexity. We argue that instead they satisfy PL⁎, a variant of the Polyak-Łojasiewicz condition [32,25], on most (but not all) of the parameter space, which guarantees both the existence of solutions and efficient optimization by (stochastic) gradient descent (SGD/GD). The PL⁎ condition of these systems is closely related to the condition number of the tangent kernel associated to a non-linear system, showing how a PL⁎-based non-linear theory parallels classical analyses of over-parameterized linear equations. We show that wide neural networks satisfy the PL⁎ condition, which explains the (S)GD convergence to a global minimum. Finally, we propose a relaxation of the PL⁎ condition applicable to “almost” over-parameterized systems.
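
For orientation, the following LaTeX sketch writes out a standard Polyak-Łojasiewicz-type inequality of the kind referenced in the summary and the geometric loss decay it yields for gradient descent. The loss L, constant μ, step size η, iterates w_t, and the set S on which the inequality holds are illustrative notation introduced here; the exact definition of PL⁎ and its constants should be taken from the paper itself.

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}

% PL*-type condition (sketch): a non-negative loss L satisfies the inequality
% below with some constant mu > 0 on a subset S of the parameter space.
\[
  \tfrac{1}{2}\,\bigl\|\nabla \mathcal{L}(\mathbf{w})\bigr\|^{2}
  \;\ge\; \mu\,\mathcal{L}(\mathbf{w})
  \qquad \text{for all } \mathbf{w} \in S .
\]

% Consequence (standard PL argument for a smooth loss): gradient descent with a
% sufficiently small step size eta contracts the loss by a fixed factor per step,
% which is the mechanism behind the (S)GD convergence discussed in the summary.
\[
  \mathcal{L}(\mathbf{w}_{t+1})
  \;\le\; \bigl(1 - \eta\,\mu\bigr)\,\mathcal{L}(\mathbf{w}_{t}),
  \qquad
  \mathbf{w}_{t+1} = \mathbf{w}_{t} - \eta\,\nabla \mathcal{L}(\mathbf{w}_{t}).
\]

\end{document}

Iterating this one-step contraction gives L(w_t) ≤ (1 − ημ)^t L(w_0), i.e. linear (geometric) convergence of the loss toward zero as long as the iterates remain in S.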
DOI: 10.1016/j.acha.2021.12.009