A Bi-GRU-based encoder–decoder framework for multivariate time series forecasting

Bibliographic Details
Published in: Soft Computing (Berlin, Germany), Vol. 28, No. 9–10, pp. 6775–6786
Main Authors: Balti, Hanen; Ben Abbes, Ali; Farah, Imed Riadh
Format: Journal Article
Language: English
Published: Berlin/Heidelberg: Springer Berlin Heidelberg, 01.05.2024
Springer Nature B.V.
ISSN: 1432-7643, 1433-7479
Description
Summary: Drought forecasting is crucial for minimizing the effects of drought, alerting people to its dangers, and assisting decision-makers in taking preventative action. This article proposes an encoder–decoder framework for multivariate time series (EDFMTS) forecasting. EDFMTS is composed of three layers: a bidirectional gated recurrent unit (Bi-GRU)-based encoder component, a temporal attention context layer, and a gated recurrent unit (GRU)-based decoder component. The proposed framework was evaluated using multivariate data gathered from various sources in China (remote-sensing sensors, climate sensors, biophysical sensors, and so on). According to the experimental results, the proposed framework outperformed the baseline methods in both univariate and multivariate time series (TS) forecasting. The coefficient of determination (R²), root-mean-squared error (RMSE), and mean absolute error (MAE) were used to evaluate the framework's performance. For EDFMTS, R², RMSE, and MAE are 0.94, 0.20, and 0.13, respectively. In contrast, the RMSEs provided by the autoregressive integrated moving average (ARIMA), PROPHET, long short-term memory (LSTM), GRU, and convolutional neural network (CNN)-LSTM models are 0.72, 0.92, 0.36, 0.40, and 0.27, respectively.
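The abstract names three components (a Bi-GRU encoder, a temporal attention context layer, and a GRU decoder) without implementation details. The following is a minimal NumPy sketch of how such a pipeline can be wired together; all dimensions, the additive form of the attention, the random-weight initialization, and the linear read-out are assumptions for illustration, not the authors' actual EDFMTS implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal GRU cell: update gate z, reset gate r, candidate state (biases omitted)."""
    def __init__(self, in_dim, hid_dim, rng):
        s = 1.0 / np.sqrt(hid_dim)
        self.Wz, self.Uz = rng.uniform(-s, s, (hid_dim, in_dim)), rng.uniform(-s, s, (hid_dim, hid_dim))
        self.Wr, self.Ur = rng.uniform(-s, s, (hid_dim, in_dim)), rng.uniform(-s, s, (hid_dim, hid_dim))
        self.Wh, self.Uh = rng.uniform(-s, s, (hid_dim, in_dim)), rng.uniform(-s, s, (hid_dim, hid_dim))

    def step(self, x, h):
        z = sigmoid(self.Wz @ x + self.Uz @ h)
        r = sigmoid(self.Wr @ x + self.Ur @ h)
        h_tilde = np.tanh(self.Wh @ x + self.Uh @ (r * h))
        return (1.0 - z) * h + z * h_tilde

def bigru_encode(cell_f, cell_b, X, hid_dim):
    """Bi-GRU encoder: concatenate forward and backward hidden states per time step."""
    T = X.shape[0]
    hf, hb = np.zeros(hid_dim), np.zeros(hid_dim)
    Hf, Hb = [], [None] * T
    for t in range(T):
        hf = cell_f.step(X[t], hf)
        Hf.append(hf)
    for t in reversed(range(T)):
        hb = cell_b.step(X[t], hb)
        Hb[t] = hb
    return np.stack([np.concatenate([f, b]) for f, b in zip(Hf, Hb)])  # (T, 2*hid_dim)

def temporal_attention(H, s, Wa, va):
    """Additive temporal attention: weight encoder states by relevance to decoder state s."""
    scores = np.array([va @ np.tanh(Wa @ np.concatenate([h, s])) for h in H])
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()
    return alpha @ H  # context vector, shape (2*hid_dim,)

# Toy multivariate series: T=12 time steps, 5 drought-related variables (illustrative only).
T, n_vars, hid = 12, 5, 8
X = rng.normal(size=(T, n_vars))

enc_f, enc_b = GRUCell(n_vars, hid, rng), GRUCell(n_vars, hid, rng)
dec = GRUCell(2 * hid, hid, rng)              # decoder consumes the attention context
Wa = rng.normal(size=(2 * hid, 3 * hid)) * 0.1  # attention over [h_t; s]
va = rng.normal(size=2 * hid) * 0.1
W_out = rng.normal(size=hid) * 0.1            # linear read-out to one forecast value

H = bigru_encode(enc_f, enc_b, X, hid)        # (12, 16)
s = np.zeros(hid)
forecasts = []
for _ in range(3):                            # roll the decoder forward 3 steps
    c = temporal_attention(H, s, Wa, va)
    s = dec.step(c, s)
    forecasts.append(W_out @ s)
print(np.round(forecasts, 3))
```

With random weights the printed values are meaningless; in practice the weights would be trained end-to-end, and the reported R², RMSE, and MAE would then be computed on held-out forecasts.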
DOI: 10.1007/s00500-023-09531-9