Stochastic Normalized Gradient Descent with Momentum for Large-Batch Training
| Published in: | arXiv.org |
|---|---|
| Main authors: | , , , |
| Format: | Paper |
| Language: | English |
| Published: | Ithaca: Cornell University Library, arXiv.org, 15.04.2024 |
| Subjects: | |
| ISSN: | 2331-8422 |
| Online access: | Get full text |
| Summary: | Stochastic gradient descent~(SGD) and its variants have been the dominant optimization methods in machine learning. Compared to SGD with small-batch training, SGD with large-batch training can better utilize the computational power of current multi-core systems such as graphics processing units~(GPUs) and can reduce the number of communication rounds in distributed training settings. Thus, SGD with large-batch training has attracted considerable attention. However, existing empirical results show that large-batch training typically leads to a drop in generalization accuracy, so guaranteeing generalization ability in large-batch training remains a challenging task. In this paper, we propose a simple yet effective method, called stochastic normalized gradient descent with momentum~(SNGM), for large-batch training. We prove that, with the same number of gradient computations, SNGM can adopt a larger batch size than momentum SGD~(MSGD), one of the most widely used variants of SGD, to converge to an \(\epsilon\)-stationary point. Empirical results on deep learning tasks verify that, when adopting the same large batch size, SNGM achieves better test accuracy than MSGD and other state-of-the-art large-batch training methods. |
|---|---|
| Bibliography: | Working paper / pre-print (SourceType-Working Papers-1; ObjectType-Working Paper/Pre-Print-1) |
| ISSN: | 2331-8422 |
| DOI: | 10.48550/arxiv.2007.13985 |
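
The abstract describes SNGM only at a high level. As a rough illustration of the underlying idea of combining a momentum buffer with gradient normalization, here is a minimal Python sketch of an SNGM-style update on a toy least-squares problem. The exact update rule, hyperparameters, and learning-rate schedule used in the paper may differ; treat this as an assumption-laden sketch, not the authors' algorithm.

```python
import numpy as np

# Hedged sketch: a stochastic normalized-gradient-with-momentum (SNGM-style)
# update, based only on the abstract's high-level description. The exact
# update rule, hyperparameters, and schedules in the paper may differ.

def sngm_step(w, grad, u, lr, beta=0.9, eps=1e-12):
    """One SNGM-style step: fold the *normalized* stochastic gradient into the
    momentum buffer u, then move the parameters against the buffer."""
    u = beta * u + grad / (np.linalg.norm(grad) + eps)
    w = w - lr * u
    return w, u

# Toy large-batch example: linear least squares on synthetic data.
rng = np.random.default_rng(0)
n, d, batch = 10_000, 20, 2_048            # batch size is large relative to n
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

w, u = np.zeros(d), np.zeros(d)
for t in range(300):
    idx = rng.choice(n, size=batch, replace=False)
    grad = X[idx].T @ (X[idx] @ w - y[idx]) / batch   # mini-batch gradient
    w, u = sngm_step(w, grad, u, lr=0.1 / np.sqrt(t + 1))  # decaying step size (a choice for this demo)

print("distance to w_true:", np.linalg.norm(w - w_true))
```

Because the gradient is normalized before entering the momentum buffer, the step length is bounded by the learning rate times 1/(1-beta) regardless of the raw gradient's magnitude, which is the kind of property that makes such updates attractive when large batches produce sharp or poorly scaled gradients.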