Convergence of batch gradient learning algorithm with smoothing L1/2 regularization for Sigma–Pi–Sigma neural networks

Bibliographic Details
Published in: Neurocomputing (Amsterdam), Volume 151, pp. 333–341
Main authors: Liu, Yan; Li, Zhengxue; Yang, Dakun; Mohamed, Kh.Sh.; Wang, Jing; Wu, Wei
Format: Journal Article
Language: English
Published: Elsevier B.V., 03.03.2015
ISSN: 0925-2312, 1872-8286
Description
Summary: Sigma–Pi–Sigma neural networks are known to provide more powerful mapping capability than traditional feed-forward neural networks. The L1/2 regularizer is very useful and efficient, and can be taken as a representative of all the Lq (0<q<1) regularizers. However, the nonsmoothness of L1/2 regularization may lead to an oscillation phenomenon. The aim of this paper is to develop a novel batch gradient method with smoothing L1/2 regularization for Sigma–Pi–Sigma neural networks. Compared with the conventional gradient learning algorithm, this method produces sparser weights and a simpler network structure, and it improves learning efficiency. A comprehensive study of the weak and strong convergence results for this algorithm is also presented, showing that the gradient of the error function goes to zero and the weight sequence converges to a fixed value, respectively.
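As a rough illustration of the smoothing idea described in the summary (not the paper's exact formulation): a common approach in the smoothing-L1/2 literature replaces |w| in the penalty term with a smooth piecewise-polynomial approximation near zero, so that the regularized error function is differentiable everywhere and the batch gradient update avoids the oscillations caused by the nonsmooth |w|^(1/2) term. The Python sketch below applies such a penalty inside a plain batch gradient loop; the specific smoothing polynomial, the hyperparameters (a, eta, lam), and the toy linear model are illustrative assumptions, whereas in the paper the weights belong to a Sigma–Pi–Sigma network.

import numpy as np

# Smooth approximation of |x| near zero (one common piecewise-polynomial
# choice in the smoothing-L1/2 literature; the paper's smoothing function
# may differ). For |x| >= a it equals |x|; for |x| < a it is a quartic
# that matches |x| and its derivative at x = +/- a and stays >= 3a/8 > 0.
def smooth_abs(x, a=0.05):
    inner = -x**4 / (8 * a**3) + 3 * x**2 / (4 * a) + 3 * a / 8
    return np.where(np.abs(x) >= a, np.abs(x), inner)

def smooth_abs_grad(x, a=0.05):
    inner = -x**3 / (2 * a**3) + 3 * x / (2 * a)
    return np.where(np.abs(x) >= a, np.sign(x), inner)

def penalty_grad(w, lam, a=0.05):
    # Gradient of the smoothed L1/2 term  lam * sum_i f(w_i)^(1/2).
    f = smooth_abs(w, a)
    return lam * 0.5 * f ** (-0.5) * smooth_abs_grad(w, a)

# Toy stand-in model (sparse linear regression) to show the batch update;
# in the paper the same kind of penalty is applied to the weights of a
# Sigma-Pi-Sigma network rather than a linear one.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
w_true = np.zeros(10)
w_true[:3] = [1.5, -2.0, 0.7]
y = X @ w_true + 0.1 * rng.normal(size=200)

w = rng.normal(scale=0.1, size=10)
eta, lam = 0.05, 0.02              # illustrative learning rate and penalty weight
for epoch in range(2000):
    grad_err = X.T @ (X @ w - y) / len(y)          # batch gradient of the error term
    w -= eta * (grad_err + penalty_grad(w, lam))   # one batch gradient step
print(np.round(w, 3))  # irrelevant weights are driven close to zero (sparsity)

With the penalty removed (lam = 0), the same loop reduces to the plain batch gradient method that the paper uses as its baseline for comparison.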
DOI: 10.1016/j.neucom.2014.09.031