Time Series Prediction Using Sparse Autoencoder and High-Order Fuzzy Cognitive Maps

Bibliographic Details
Published in: IEEE Transactions on Fuzzy Systems, Vol. 28, No. 12, pp. 3110-3121
Main Authors: Wu, Kai; Liu, Jing; Liu, Penghui; Yang, Shanchao
Format: Journal Article
Language: English
Published: New York IEEE 01.12.2020
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
ISSN: 1063-6706, 1941-0034
Description
Summary: The problem of time series prediction based on fuzzy cognitive maps (FCMs) remains unresolved. Although many methods have been proposed to cope with this issue, their performance is far from satisfactory. Traditional FCM-based predictors have three limitations. First, current feature extraction operators are incapable of learning good representations of the original time series. Second, current methods use only the output of the FCM to predict the next value; they do not directly utilize the important information in the latent features. Third, current FCM-based predictors optimize each component individually, which leads to low prediction accuracy; for example, these methods first optimize the feature extraction operator and then learn the FCM from the latent features, rather than optimizing the whole prediction model simultaneously. In this article, we develop a framework based on a sparse autoencoder (SAE) and a high-order FCM (HFCM) to address the time series prediction problem; we refer to this framework as SAE-FCM. To overcome the first limitation, an SAE is employed to extract features from the original time series. Unlike current FCM-based predictors, our method combines the outputs of both the SAE and the HFCM to calculate the predicted value, thereby overcoming the second limitation. Applying the idea of "fine-tuning" from deep learning, the weights of SAE-FCM can be updated by batch gradient descent when the prediction errors are large; thus, SAE-FCM is optimized as a whole, overcoming the third limitation. We validate the performance of SAE-FCM on ten datasets. Comparisons with results obtained by state-of-the-art methods demonstrate the effectiveness of our method, and extensive experiments show that SAE-FCM effectively overcomes the above limitations.
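To make the pipeline described in the abstract more concrete, the following is a minimal NumPy sketch of an SAE-FCM-style one-step forward pass. It is a sketch under stated assumptions, not the authors' implementation: a single sigmoid encoder stands in for the sparse autoencoder, the high-order FCM keeps one weight matrix per lag up to order K, and a linear read-out combines the SAE and HFCM outputs. All dimensions, initializations, and the exact combination rule are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch of an SAE-FCM-style predictor (not the paper's exact design).
# Assumptions: one sigmoid encoder stands in for the SAE, the HFCM of order K has
# one weight matrix per lag, and a linear layer combines SAE and HFCM outputs,
# mirroring the "combine both outputs" idea in the abstract.

rng = np.random.default_rng(0)

n_input, n_hidden, order = 8, 4, 2                               # window length, latent size, HFCM order K
W_enc = rng.normal(scale=0.1, size=(n_hidden, n_input))          # SAE encoder weights
W_fcm = rng.normal(scale=0.1, size=(order, n_hidden, n_hidden))  # one HFCM matrix per lag
w_out = rng.normal(scale=0.1, size=2 * n_hidden)                 # combines SAE + HFCM features


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def predict(window, latent_history):
    """Predict the next value from a raw input window and the last K latent states."""
    # 1) SAE encoding of the current input window (the latent features).
    h = sigmoid(W_enc @ window)
    # 2) High-order FCM update: each concept aggregates weighted influences
    #    from the previous K latent states and passes through the sigmoid.
    a = sigmoid(sum(W_fcm[k] @ latent_history[-(k + 1)] for k in range(order)))
    # 3) Combine the SAE and HFCM outputs to produce the prediction.
    return w_out @ np.concatenate([h, a]), h


# Usage: slide a window over a toy series, keeping the last K latent states.
series = np.sin(np.linspace(0, 6 * np.pi, 200))
history = [np.zeros(n_hidden) for _ in range(order)]
for t in range(n_input, len(series)):
    y_hat, h = predict(series[t - n_input:t], history)
    history = history[1:] + [h]
```

In the full method described by the abstract, all of these weights would be trained jointly, with batch gradient descent fine-tuning the whole model when prediction errors are large; the sketch above shows only the forward pass.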
DOI: 10.1109/TFUZZ.2019.2956904