L1/2 regularization learning for smoothing interval neural networks: Algorithms and convergence analysis

Bibliographic Details
Published in: Neurocomputing (Amsterdam), Vol. 272, pp. 122-129
Main Authors: Yang, Dakun; Liu, Yan
Format: Journal Article
Language: English
Published: Elsevier B.V., 10 January 2018
ISSN: 0925-2312, 1872-8286
Online Access: Full text
Abstract: Interval neural networks can readily address uncertain information, since they inherently handle various kinds of uncertainty represented by intervals. Lq (0 < q < 1) regularization was proposed after L1 regularization to obtain better solutions to sparsity problems; among these, L1/2 regularization is of particular importance and can be taken as a representative. However, weight oscillation may occur during the learning process because the derivative of the L1/2 regularization term is discontinuous at the origin. In this paper, a novel batch gradient algorithm with smoothing L1/2 regularization is proposed to prevent weight oscillation in a smoothing interval neural network (SINN), a modified interval neural network. Here, by smoothing we mean that, in a neighborhood of the origin, the absolute values of the weights are replaced by a smooth function so that the derivative is continuous. Compared with the conventional gradient learning algorithm with L1/2 regularization, this approach obtains sparser weights and a simpler network structure, and improves learning efficiency. We then present a sufficient condition for the convergence of SINN. Finally, simulation results illustrate the main convergence results.
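
To make the smoothing idea concrete, the following is a minimal sketch in Python/NumPy, not code from the paper. It replaces |w| near the origin with a piecewise quartic polynomial whose value and first derivative match |w| at the boundary, forms the smoothed L1/2 penalty, and applies a batch-gradient step on the penalized error. The specific polynomial and the names smooth_abs, a, lam, and eta are illustrative assumptions; the paper's exact smoothing function may differ.

    import numpy as np

    # Illustrative smoothing of |w| near the origin. The abstract only states
    # that |w| is replaced by a smooth function in a neighborhood of 0; this
    # piecewise quartic (equal to |w| for |w| >= a, with matching value and
    # first derivative at w = +/-a) is one such choice.
    def smooth_abs(w, a=0.1):
        poly = -w**4 / (8 * a**3) + 3 * w**2 / (4 * a) + 3 * a / 8
        return np.where(np.abs(w) < a, poly, np.abs(w))

    def smooth_abs_grad(w, a=0.1):
        poly_grad = -w**3 / (2 * a**3) + 3 * w / (2 * a)
        return np.where(np.abs(w) < a, poly_grad, np.sign(w))

    # Gradient of the smoothed L1/2 penalty sum_i smooth_abs(w_i)^(1/2).
    # Because smooth_abs(w) >= 3a/8 > 0, the square root is differentiable
    # everywhere, which is what removes the weight oscillation.
    def l_half_penalty_grad(w, a=0.1):
        return smooth_abs_grad(w, a) / (2.0 * np.sqrt(smooth_abs(w, a)))

    # One batch-gradient step on E(w) + lam * penalty(w), where grad_E is
    # the error gradient computed over the whole training batch.
    def batch_gradient_step(w, grad_E, lam=1e-3, eta=0.05, a=0.1):
        return w - eta * (grad_E + lam * l_half_penalty_grad(w, a))

    # Toy usage: sparse least squares with a full-batch error gradient.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 10))
    w_true = np.zeros(10)
    w_true[:3] = [1.5, -2.0, 0.8]
    y = X @ w_true
    w = rng.normal(scale=0.1, size=10)
    for _ in range(2000):
        grad_E = X.T @ (X @ w - y) / len(y)
        w = batch_gradient_step(w, grad_E, lam=1e-2, eta=0.1)
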
DOI: 10.1016/j.neucom.2017.06.061