Visual prediction method based on time series-driven LSTM model


Detailed bibliography
Published in: Scientific Reports, Vol. 15, No. 1, Article 38057 (14 pages)
Main authors: Jumahong, Huxidan; Wang, Yongjie; Aili, Abuduwaili; Wang, Weina
Format: Journal Article
Language: English
Published: London: Nature Publishing Group UK (Nature Portfolio), 30 October 2025
ISSN: 2045-2322
Description
Summary: Significant progress has been made on time series prediction and image processing problems. However, most studies focus on either time series or image processing in isolation and fail to combine the advantages of both fields. To overcome the limitations of existing algorithms in image temporal inference, this paper proposes a novel visual prediction framework based on a time series forecasting model, which can predict single-frame or multi-frame images by thoroughly analyzing their spatio-temporal features. First, a ViT image feature extraction module is constructed by randomly masking and reconstructing the images in order to analyze the learned representations and extract features. Then, a time series construction module is designed to convert the extracted features into a time series representation suitable for the LSTM network. Finally, the time series data is forecast by the LSTM, and the predicted series is transformed back into the predicted image. A series of experiments is conducted on three types of cloud image datasets; the results are analyzed and demonstrate the effectiveness and feasibility of the proposed method in terms of image prediction performance.
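
The pipeline described in the summary (ViT-style feature extraction per frame, conversion of the features into a sequence, LSTM forecasting, and decoding the predicted features back into an image) can be illustrated with a minimal PyTorch-style sketch. All module names, dimensions, the simple linear decoder, and the omission of the masked-reconstruction pretraining step are assumptions made for illustration; this is not the authors' implementation.

import torch
import torch.nn as nn

class PatchEncoder(nn.Module):
    """Stand-in for the ViT feature extractor: embeds image patches with a
    small Transformer encoder and pools them into one feature per frame
    (the random masking/reconstruction pretraining is omitted here)."""
    def __init__(self, img_size=64, patch=8, dim=128):
        super().__init__()
        self.proj = nn.Conv2d(1, dim, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, x):                                   # x: (B, 1, H, W)
        tokens = self.proj(x).flatten(2).transpose(1, 2)    # (B, N_patches, dim)
        return self.encoder(tokens).mean(dim=1)             # (B, dim) frame feature

class FramePredictor(nn.Module):
    """Treats per-frame features as a time series, forecasts the next step
    with an LSTM, and maps the predicted feature back to an image."""
    def __init__(self, dim=128, img_size=64):
        super().__init__()
        self.encoder = PatchEncoder(img_size=img_size, dim=dim)
        self.lstm = nn.LSTM(input_size=dim, hidden_size=dim, batch_first=True)
        self.decoder = nn.Sequential(nn.Linear(dim, img_size * img_size), nn.Sigmoid())
        self.img_size = img_size

    def forward(self, frames):                              # frames: (B, T, 1, H, W)
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).view(b, t, -1)  # (B, T, dim)
        _, (h, _) = self.lstm(feats)                        # last hidden state
        img = self.decoder(h[-1]).view(b, 1, self.img_size, self.img_size)
        return img                                          # predicted next frame

if __name__ == "__main__":
    clip = torch.rand(2, 5, 1, 64, 64)                      # toy batch: 2 clips of 5 frames
    pred = FramePredictor()(clip)
    print(pred.shape)                                       # torch.Size([2, 1, 64, 64])

The sketch only produces the next frame; multi-frame prediction as mentioned in the summary would presumably be obtained by feeding predicted frames (or predicted features) back into the sequence and forecasting recursively.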
DOI: 10.1038/s41598-025-21911-9