Convergence of a Gradient-Based Learning Algorithm With Penalty for Ridge Polynomial Neural Networks

Bibliographic Details
Published in: IEEE Access, Volume 9, pp. 28742-28752
Main authors: Fan, Qinwei; Peng, Jigen; Li, Haiyang; Lin, Shoujin
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2021
ISSN: 2169-3536
Description
Summary: Recently there has been renewed interest in high-order neural networks (HONNs) because of their powerful mapping capability. The ridge polynomial neural network (RPNN) is an important kind of HONN and serves as an efficient instrument for classification and regression tasks. To speed up convergence and strengthen the generalization ability of the network, this paper introduces a regularization model for the RPNN with a Group Lasso penalty, which addresses structural sparsity at the group level. The Group Lasso penalty, however, brings two main obstacles: numerical oscillation during training and difficulty in the convergence analysis. To overcome these drawbacks, we adopt a smoothing function to approximate the Group Lasso penalty. Strong and weak convergence theorems, as well as monotonicity theorems, are established for this novel algorithm. We also demonstrate the efficiency of the proposed algorithm through numerical experiments, comparing it against training with no regularizer, the $L_{2}$ regularizer, the $L_{1/2}$ regularizer, the smoothing $L_{1/2}$ regularizer, and the Group Lasso regularizer; the experiments also verify the relevant theoretical analysis.
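For orientation, a sketch of such a smoothed Group Lasso objective is given below. The abstract does not state the paper's exact error function or smoothing function, so the symbols used here (error term $E(\mathbf{W})$, group weight vectors $\mathbf{w}_g$, penalty coefficient $\lambda$, smoothing parameter $\mu$, learning rate $\eta$) are illustrative assumptions, not the authors' definitions.

% Illustrative sketch only; names and the square-root smoothing are assumptions.
\[
  J(\mathbf{W}) = E(\mathbf{W}) + \lambda \sum_{g=1}^{G} \|\mathbf{w}_g\|_2
  \qquad \text{(Group Lasso penalty, non-smooth at } \mathbf{w}_g = \mathbf{0}\text{)}
\]
\[
  J_{\mu}(\mathbf{W}) = E(\mathbf{W}) + \lambda \sum_{g=1}^{G} \sqrt{\|\mathbf{w}_g\|_2^{2} + \mu^{2}},
  \qquad
  \mathbf{w}_g^{(k+1)} = \mathbf{w}_g^{(k)} - \eta \, \nabla_{\mathbf{w}_g} J_{\mu}\!\left(\mathbf{W}^{(k)}\right)
\]

The square-root smoothing shown here is one standard choice: it makes the penalty differentiable at $\mathbf{w}_g = \mathbf{0}$, so a plain gradient step avoids the numerical oscillation mentioned in the abstract, and as $\mu \to 0$ the smoothed term recovers the Group Lasso penalty.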
DOI: 10.1109/ACCESS.2020.3048235