Stochastic Normalized Gradient Descent with Momentum for Large-Batch Training
| Published in: | arXiv.org |
|---|---|
| Main authors: | , , , |
| Format: | Paper |
| Language: | English |
| Publication details: | Ithaca: Cornell University Library, arXiv.org, 15.04.2024 |
| Subject: | |
| ISSN: | 2331-8422 |
| Online access: | Get full text |
| Summary: | Stochastic gradient descent~(SGD) and its variants have been the dominating optimization methods in machine learning. Compared to SGD with small-batch training, SGD with large-batch training can better utilize the computational power of current multi-core systems such as graphics processing units~(GPUs) and can reduce the number of communication rounds in distributed training settings. Thus, SGD with large-batch training has attracted considerable attention. However, existing empirical results showed that large-batch training typically leads to a drop in generalization accuracy. Hence, how to guarantee the generalization ability in large-batch training becomes a challenging task. In this paper, we propose a simple yet effective method, called stochastic normalized gradient descent with momentum~(SNGM), for large-batch training. We prove that with the same number of gradient computations, SNGM can adopt a larger batch size than momentum SGD~(MSGD), which is one of the most widely used variants of SGD, to converge to an \(\epsilon\)-stationary point. Empirical results on deep learning verify that when adopting the same large batch size, SNGM can achieve better test accuracy than MSGD and other state-of-the-art large-batch training methods. |
|---|---|
| Document type: | Working Paper / Pre-Print |
| ISSN: | 2331-8422 |
| DOI: | 10.48550/arxiv.2007.13985 |
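
The abstract describes SNGM only at a high level. The following is a minimal PyTorch-style sketch of the kind of update the name suggests, assuming the stochastic mini-batch gradient is normalized by its global norm before being accumulated into a standard heavy-ball momentum buffer and applied with a constant learning rate; the class name, hyperparameter defaults, and the `eps` safeguard are illustrative and not taken from the paper.

```python
# Hedged sketch of a normalized-gradient-with-momentum update as a PyTorch
# optimizer. Assumed update (not verified against the paper):
#   u_{t+1} = momentum * u_t + g_t / ||g_t||
#   w_{t+1} = w_t - lr * u_{t+1}
import torch


class SNGMSketch(torch.optim.Optimizer):
    def __init__(self, params, lr=0.1, momentum=0.9, eps=1e-12):
        defaults = dict(lr=lr, momentum=momentum, eps=eps)
        super().__init__(params, defaults)

    @torch.no_grad()
    def step(self, closure=None):
        loss = None
        if closure is not None:
            with torch.enable_grad():
                loss = closure()
        for group in self.param_groups:
            params = [p for p in group["params"] if p.grad is not None]
            if not params:
                continue
            # Global gradient norm over all parameters in the group
            # (eps avoids division by zero at an exact stationary point).
            grad_norm = torch.sqrt(
                sum(p.grad.pow(2).sum() for p in params)
            ) + group["eps"]
            for p in params:
                state = self.state[p]
                buf = state.get("momentum_buffer")
                if buf is None:
                    buf = torch.zeros_like(p)
                    state["momentum_buffer"] = buf
                # Momentum on the *normalized* gradient, then the parameter step.
                buf.mul_(group["momentum"]).add_(p.grad / grad_norm)
                p.add_(buf, alpha=-group["lr"])
        return loss
```

Normalizing by the global norm (rather than per-tensor) keeps the relative scale between layers intact while bounding the step size, which is one plausible reason such an update could tolerate larger batch sizes than plain momentum SGD; for the precise algorithm and its convergence guarantees, see the paper at the DOI above.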