Online sequential echo state network with sparse RLS algorithm for time series prediction


Detailed bibliography
Published in: Neural Networks, Vol. 118, pp. 32-42
Main authors: Yang, Cuili; Qiao, Junfei; Ahmad, Zohaib; Nie, Kaizhe; Wang, Lei
Format: Journal Article
Language: English
Published: United States: Elsevier Ltd, 01.10.2019
ISSN: 0893-6080, 1879-2782
Description
Summary: Recently, echo state networks (ESNs) have been widely used for time series prediction. To meet the demands of practical applications and avoid overfitting, the online sequential ESN with sparse recursive least squares (OSESN-SRLS) algorithm is proposed. First, ℓ0 and ℓ1 norm sparsity penalty constraints on the output weights are separately employed to control the network size. Second, the sparse recursive least squares (SRLS) algorithm and the subgradient technique are combined to estimate the output weight matrix. Third, an adaptive selection mechanism for the ℓ0 or ℓ1 norm regularization parameter is designed. With the selected regularization parameter, the developed SRLS is proved to perform comparably to or better than regular RLS. Furthermore, the convergence of OSESN-SRLS is theoretically analyzed to guarantee its effectiveness. Simulation results illustrate that the proposed OSESN-SRLS consistently outperforms existing ESNs in terms of estimation accuracy and network compactness.

Highlights:
• The online sequential ESN with sparse RLS algorithm is studied to improve estimation accuracy and network compactness.
• The network size is controlled by the ℓ0 and ℓ1 norm sparsity penalty constraints.
• Estimation performance is improved by the regularization parameter selection rule.
• Algorithm convergence is analyzed to guarantee its effectiveness.
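The abstract's core idea, training ESN output weights online with an ℓ1-penalized recursive least squares update, can be illustrated with a minimal sketch. This is not the paper's OSESN-SRLS implementation: the reservoir size, forgetting factor, and fixed regularization strength `gamma` below are all assumed for illustration (the paper selects the regularization parameter adaptively), and the ℓ1 subgradient shrinkage term follows the generic sparse-RLS pattern rather than the authors' exact update.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, not taken from the paper
n_in, n_res = 1, 50

# Random reservoir; spectral radius scaled below 1 for the echo state property
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))

# RLS state: inverse correlation matrix P and output weight vector w
P = np.eye(n_res) * 1e2
w = np.zeros(n_res)
lam = 0.999    # forgetting factor (assumed)
gamma = 1e-4   # l1 penalty strength (assumed; chosen adaptively in the paper)

x = np.zeros(n_res)
series = np.sin(0.3 * np.arange(400))  # toy time series

for t in range(len(series) - 1):
    u, d = series[t], series[t + 1]     # input and one-step-ahead target
    x = np.tanh(W_in @ [u] + W @ x)     # reservoir state update
    # Standard RLS gain and inverse-correlation update
    Px = P @ x
    k = Px / (lam + x @ Px)
    P = (P - np.outer(k, Px)) / lam
    e = d - w @ x                       # a priori prediction error
    # Sparse RLS step: usual correction plus an l1 subgradient shrinkage term
    w = w + k * e - gamma * P @ np.sign(w)

print("weights above 1e-3:", np.count_nonzero(np.abs(w) > 1e-3), "of", n_res)
```

The shrinkage term `gamma * P @ np.sign(w)` is what drives small output weights toward zero, which is how the ℓ1 penalty yields the network compactness the abstract refers to.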
DOI: 10.1016/j.neunet.2019.05.006