Online sequential ELM algorithm with forgetting factor for real applications


Bibliographic Details
Published in: Neurocomputing (Amsterdam), Vol. 261, pp. 144-152
Main authors: Zhang, Haigang; Zhang, Sen; Yin, Yixin
Format: Journal Article
Language: English
Published: Elsevier B.V., 25 October 2017
ISSN: 0925-2312, 1872-8286
Online access: Full text
Description
Abstract: Sequential learning algorithms are a good choice for learning data one-by-one or chunk-by-chunk. Liang et al. proposed the OS-ELM algorithm, based on the ordinary ELM algorithm, which produces better generalization performance than other well-known sequential learning algorithms. One deficiency of OS-ELM is that all observations are weighted equally regardless of their acquisition time, whereas the training data in many real industrial applications are time-sensitive. In this paper, we propose a modified online sequential learning algorithm with a forgetting factor (named the WOS-ELM algorithm) that gives more weight to new observations. A convergence analysis then shows that the estimate of the output weights converges at an exponential rate as new observations arrive. The value of the forgetting factor adjusts automatically with the forecast error, removing the need for excessive human intervention. The simulation section covers several applications, including time-series prediction, time-variant system identification, and a weather forecasting problem. The simulation results show that WOS-ELM is more accurate and robust than other sequential learning algorithms.
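To make the idea concrete, the sketch below shows a generic OS-ELM-style sequential update with a *fixed* forgetting factor, i.e. a recursive least-squares update in which past observations are discounted geometrically. This is not the paper's WOS-ELM method (in particular, the paper's error-driven adaptive choice of the forgetting factor is not reproduced here); the toy target, network sizes, and the constant `lam = 0.98` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical problem sizes (not from the paper): n_in inputs, L hidden nodes.
n_in, L = 1, 20

# Random hidden-layer parameters, fixed once, as in the standard ELM.
W = rng.normal(size=(L, n_in))
b = rng.normal(size=L)

def hidden(X):
    """Sigmoid hidden-layer output matrix H for an input batch X (N x n_in)."""
    return 1.0 / (1.0 + np.exp(-(X @ W.T + b)))

# --- Initialization phase on a small batch (small ridge term for stability) ---
X0 = rng.uniform(-1, 1, size=(50, n_in))
T0 = np.sin(3 * X0)                      # toy target: t = sin(3x)
H0 = hidden(X0)
P = np.linalg.inv(H0.T @ H0 + 1e-3 * np.eye(L))
beta = P @ H0.T @ T0                     # initial output weights

# --- Sequential phase: RLS update with a fixed forgetting factor lam < 1 ---
lam = 0.98
for _ in range(500):
    x = rng.uniform(-1, 1, size=(1, n_in))
    t = np.sin(3 * x)
    H = hidden(x)                        # 1 x L
    # Old observations are discounted by lam; new ones weigh more.
    K = P @ H.T @ np.linalg.inv(lam * np.eye(1) + H @ P @ H.T)
    P = (P - K @ H @ P) / lam
    beta = beta + K @ (t - H @ beta)     # correct weights by the forecast error

# Check the fit on fresh samples
Xtest = rng.uniform(-1, 1, size=(200, n_in))
err = np.sqrt(np.mean((hidden(Xtest) @ beta - np.sin(3 * Xtest)) ** 2))
print(f"test RMSE: {err:.4f}")
```

With `lam = 1` this reduces to plain OS-ELM (all observations weighted equally); pushing `lam` below 1 trades long-term memory for faster tracking of time-variant systems, which is the behavior the adaptive scheme in the paper tunes automatically.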
DOI: 10.1016/j.neucom.2016.09.121