Scaling up stochastic gradient descent for non-convex optimisation

Stochastic gradient descent (SGD) is a widely adopted iterative method for optimizing differentiable objective functions. In this paper, we propose and discuss a novel approach to scale up SGD in applications involving non-convex functions and large datasets. We address the bottleneck problem arising...
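For context on the method the abstract refers to, below is a minimal sketch of vanilla mini-batch SGD. It is not the paper's proposed scaled-up variant; the function names (`sgd`, `grad_fn`), hyperparameters, and the simple least-squares objective in the usage example are illustrative assumptions only.

```python
import numpy as np

def sgd(grad_fn, w0, data, lr=0.01, epochs=10, batch_size=32, seed=0):
    """Vanilla mini-batch SGD: repeatedly step against a stochastic
    gradient estimated on a random mini-batch of the data.
    (Illustrative sketch; not the paper's method.)"""
    rng = np.random.default_rng(seed)
    w = np.array(w0, dtype=float)
    n = len(data)
    for _ in range(epochs):
        order = rng.permutation(n)            # reshuffle each epoch
        for start in range(0, n, batch_size):
            batch = data[order[start:start + batch_size]]
            w -= lr * grad_fn(w, batch)       # gradient step on the mini-batch
    return w

# Usage on an assumed toy least-squares problem (chosen for brevity;
# the paper's setting is non-convex):
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))
true_w = np.arange(1.0, 6.0)
y = X @ true_w + 0.1 * rng.normal(size=1000)
data = np.hstack([X, y[:, None]])

def grad_fn(w, batch):
    Xb, yb = batch[:, :-1], batch[:, -1]
    return 2.0 / len(batch) * Xb.T @ (Xb @ w - yb)

w_hat = sgd(grad_fn, np.zeros(5), data, lr=0.05, epochs=20)
```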


Bibliographic Details
Published in: Machine Learning, Vol. 111, No. 11, pp. 4039-4079
Main Authors: Mohamad, Saad; Alamri, Hamad; Bouchachia, Abdelhamid
Format: Journal Article
Language: English
Published: New York: Springer US (Springer Nature B.V.), 01.11.2022
ISSN: 0885-6125, 1573-0565