Convergence of batch gradient algorithm with smoothing composition of group l0 and l1/2 regularization for feedforward neural networks
| Published in: | Progress in Artificial Intelligence, Vol. 11, No. 3, pp. 269-278 |
|---|---|
| Main authors: |  |
| Medium: | Journal Article |
| Language: | English |
| Publication details: | Berlin/Heidelberg: Springer Berlin Heidelberg, 01.09.2022 (Springer Nature B.V.) |
| ISSN: | 2192-6352, 2192-6360 |
| Online access: | Get full text |
| Summary: | In this paper, we prove the convergence of the batch gradient method for training feedforward neural networks. We propose a new penalty term based on the composition of a smoothing $L_{1/2}$ penalty for the weight vectors incoming to the hidden nodes and a smoothing group $L_0$ regularization for the resulting vector (BGSGL$_0$L$_{1/2}$). This procedure forces weights to become smaller at the group level, so that after training some redundant hidden nodes can be removed; moreover, it can remove some redundant weights of the surviving hidden nodes. The conditions for convergence are given. The importance of the proposed regularization objective is also tested on numerical examples of classification and regression tasks. (An illustrative sketch of the composite penalty follows the record below.) |
|---|---|
| DOI: | 10.1007/s13748-022-00285-3 |
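The summary describes the BGSGL$_0$L$_{1/2}$ penalty only in words. Below is a minimal Python sketch of one plausible reading of that composition: a smoothed $L_{1/2}$ quasi-norm of each hidden node's incoming weight vector, followed by a smoothed group $L_0$ surrogate over the resulting per-node vector. The specific smoothing functions, the parameter values (`EPS`, `SIGMA`, `LAM`), and all function names are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

# Hypothetical smoothing parameters; the paper defines its own smoothing
# function and does not necessarily use these forms or values.
EPS = 1e-3     # smoothing width for the L1/2 term
SIGMA = 1e-2   # smoothing width for the group L0 surrogate
LAM = 1e-4     # regularization strength

def smooth_l_half(v, eps=EPS):
    """Smoothed L1/2 quasi-norm of a weight vector v.

    Each |v_i|^(1/2) is replaced by (v_i^2 + eps^2)^(1/4), which is
    differentiable at zero and converges to |v_i|^(1/2) as eps -> 0.
    """
    return np.sum((v**2 + eps**2) ** 0.25)

def smooth_group_l0(x, sigma=SIGMA):
    """Smoothed group L0 'count' of the nonzero entries of x, using the
    common surrogate 1 - exp(-x^2 / sigma^2) for the indicator x != 0."""
    return np.sum(1.0 - np.exp(-(x**2) / sigma**2))

def composite_penalty(W, lam=LAM):
    """Composite regularizer in the spirit of BGSGL0L1/2.

    W: (n_hidden, n_inputs) matrix whose j-th row holds the weights
       incoming to hidden node j. The smoothed L1/2 quasi-norm of each
       row is computed first, and the smoothed group L0 surrogate is
       then applied to the resulting per-node vector.
    """
    per_node = np.array([smooth_l_half(row) for row in W])
    return lam * smooth_group_l0(per_node)

# Usage: add the penalty to the empirical batch loss before each full-batch
# gradient step, e.g. total_loss = batch_loss(W) + composite_penalty(W).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.normal(size=(8, 5))   # toy hidden-layer weight matrix
    print(composite_penalty(W))
```

Because the group $L_0$ surrogate acts on whole per-node norms, driving one row of `W` toward zero prunes an entire hidden node, while the inner $L_{1/2}$ term also sparsifies individual weights within the surviving nodes, matching the two pruning effects described in the summary.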