Convergence of batch gradient learning algorithm with smoothing L1/2 regularization for Sigma–Pi–Sigma neural networks

Sigma–Pi–Sigma neural networks are known to provide more powerful mapping capability than traditional feed-forward neural networks. The L1/2 regularizer is very useful and efficient, and can be taken as a representative of all the Lq (0 < q < 1) regularizers. However, the nonsmoothness of L1/2 regu...
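The abstract's core idea, smoothing the nonsmooth L1/2 penalty so that batch gradient descent is well defined, can be sketched as follows. This is a generic smoothing (replacing |w|^(1/2) with (w² + ε²)^(1/4) so the gradient is finite at w = 0), not necessarily the exact smoothing function used in the paper, and the regression problem, step size, and parameter values are all illustrative.

```python
import numpy as np

def smooth_l12_penalty(w, eps=0.01):
    # Smoothed L1/2 penalty: sum((w_i^2 + eps^2)^(1/4)).
    # eps > 0 removes the nonsmoothness of |w|^(1/2) at w = 0;
    # as eps -> 0 this approaches the exact L1/2 regularizer.
    return np.sum((w**2 + eps**2) ** 0.25)

def smooth_l12_grad(w, eps=0.01):
    # Gradient of the smoothed penalty; finite everywhere,
    # unlike the gradient of |w|^(1/2), which blows up at 0.
    return 0.5 * w * (w**2 + eps**2) ** (-0.75)

# Batch gradient descent on a least-squares loss plus the smoothed
# penalty (a stand-in for network training; all values illustrative).
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
true_w = np.array([1.5, 0.0, 0.0, -2.0, 0.0])
y = X @ true_w

w = np.zeros(5)
lam, lr = 0.1, 0.01          # regularization strength, learning rate
for _ in range(2000):
    grad = X.T @ (X @ w - y) / len(y) + lam * smooth_l12_grad(w)
    w -= lr * grad

# The penalty drives the irrelevant weights toward zero while the
# relevant ones stay close to (slightly shrunk from) their true values.
print(np.round(w, 2))
```

The sparsity-inducing behavior is the usual motivation for Lq penalties with q < 1; the smoothing only exists to make the batch gradient iteration, and hence a convergence analysis, well posed.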


Bibliographic Details
Published in: Neurocomputing (Amsterdam), Vol. 151, pp. 333–341
Main Authors: Liu, Yan; Li, Zhengxue; Yang, Dakun; Mohamed, Kh.Sh.; Wang, Jing; Wu, Wei
Format: Journal Article
Language: English
Published: Elsevier B.V., 03.03.2015
ISSN: 0925-2312, 1872-8286