A Bi-GRU-based encoder–decoder framework for multivariate time series forecasting

Published in: Soft computing (Berlin, Germany), Volume 28, Issue 9-10, pp. 6775–6786
Main authors: Balti, Hanen; Ben Abbes, Ali; Farah, Imed Riadh
Medium: Journal Article
Language: English
Publication details: Berlin/Heidelberg: Springer Berlin Heidelberg, 01.05.2024
Springer Nature B.V.
ISSN:1432-7643, 1433-7479
Description
Summary: Drought forecasting is crucial for minimizing the effects of drought, alerting people to its dangers, and assisting decision-makers in taking preventative action. This article proposes an encoder–decoder framework for multivariate time series (EDFMTS) forecasting. EDFMTS is composed of three layers: a bidirectional gated recurrent unit (Bi-GRU)-based encoder component, a temporal attention context layer, and a gated recurrent unit (GRU)-based decoder component. The proposed framework was evaluated using multivariate data gathered from various sources in China (remote-sensing sensors, climate sensors, biophysical sensors, and so on). According to experimental results, the proposed framework outperformed the baseline methods in univariate and multivariate time series (TS) forecasting. The coefficient of determination (R²), root-mean-squared error (RMSE), and mean absolute error (MAE) were used to evaluate the framework's performance. The R², RMSE, and MAE are 0.94, 0.20, and 0.13, respectively, for EDFMTS. In contrast, the RMSE values provided by autoregressive integrated moving average (ARIMA), PROPHET, long short-term memory (LSTM), GRU, and convolutional neural network (CNN)-LSTM are 0.72, 0.92, 0.36, 0.40, and 0.27, respectively.
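The abstract reports three standard regression metrics (R², RMSE, MAE). As a minimal sketch of how such scores are typically computed, the functions below implement the textbook definitions with NumPy; they are illustrative only and are not taken from the article's code, which is not available here.

```python
import numpy as np

def r2_score(y_true, y_pred):
    # Coefficient of determination: 1 - SS_res / SS_tot
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return float(1.0 - ss_res / ss_tot)

def rmse(y_true, y_pred):
    # Root-mean-squared error
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mae(y_true, y_pred):
    # Mean absolute error
    return float(np.mean(np.abs(y_true - y_pred)))
```

Higher R² (closer to 1) and lower RMSE/MAE indicate better fit, which is the sense in which EDFMTS's 0.94 / 0.20 / 0.13 outperform the baselines' RMSE values listed above.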
DOI: 10.1007/s00500-023-09531-9