PT-TDGCN: Pre-Trained Trend-Aware Dynamic Graph Convolutional Network for Traffic Flow Prediction

Bibliographic Details
Published in: Sensors (Basel, Switzerland), Vol. 25, No. 21, p. 6709
Main Authors: Yang, Hanqing; Wei, Sen; Wang, Yuanqing
Format: Journal Article
Language: English
Published: Switzerland: MDPI AG, 03.11.2025
ISSN: 1424-8220
Description
Summary: Accurate traffic flow prediction is vital for intelligent transportation systems, yet strong spatiotemporal coupling and multi-scale dynamics make modelling difficult. Existing methods often rely on static adjacency and short input windows, limiting adaptation to time-varying spatial relations and long-term patterns. To address these issues, we propose the Pre-trained Trend-aware Dynamic Graph Convolutional Network (PT-TDGCN), a two-stage framework. In the pre-training stage, a Transformer-based masked autoencoder learns segment-level temporal representations from historical sequences. In the prediction stage, three designs are integrated: (1) dynamic graph learning parameterized by tensor decomposition; (2) convolutional trend-aware attention that adds 1D convolutions to capture local trends while preserving global context; and (3) spatial graph convolution combined with lightweight fusion projection for aligning pre-trained, spatial, and temporal representations. Extensive experiments on four real-world datasets demonstrated that PT-TDGCN consistently outperformed 14 baseline models, achieving superior predictive accuracy and robustness.
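
To illustrate two of the prediction-stage designs named in the summary, the following minimal PyTorch sketch shows one common way to realise a tensor-decomposition-parameterised dynamic adjacency and a convolutional trend-aware attention. The class names (DynamicGraphLearner, TrendAwareAttention), the CP-style factorisation, and all tensor shapes are illustrative assumptions, not the authors' implementation.

    # Hypothetical sketch (not the authors' code): plausible realisations of
    # (1) dynamic graph learning via low-rank tensor factors and
    # (2) trend-aware attention whose queries/keys come from 1D convolutions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DynamicGraphLearner(nn.Module):
        """Builds a time-varying adjacency A_t from CP-style low-rank factors."""
        def __init__(self, num_nodes, num_steps, rank=16):
            super().__init__()
            self.src = nn.Parameter(torch.randn(num_nodes, rank))   # source-node factor
            self.dst = nn.Parameter(torch.randn(num_nodes, rank))   # target-node factor
            self.time = nn.Parameter(torch.randn(num_steps, rank))  # time-step factor

        def forward(self, t):
            # A_t = softmax(ReLU(U diag(s_t) V^T)): adjacency varies with time index t
            core = self.src * self.time[t]          # (N, rank)
            logits = F.relu(core @ self.dst.T)      # (N, N)
            return F.softmax(logits, dim=-1)

    class TrendAwareAttention(nn.Module):
        """Self-attention whose queries/keys are produced by 1D convolutions over
        the time axis, so local trends shape the (global) attention scores."""
        def __init__(self, d_model, kernel_size=3):
            super().__init__()
            pad = kernel_size // 2
            self.q_conv = nn.Conv1d(d_model, d_model, kernel_size, padding=pad)
            self.k_conv = nn.Conv1d(d_model, d_model, kernel_size, padding=pad)
            self.v_proj = nn.Linear(d_model, d_model)

        def forward(self, x):                       # x: (batch, T, d_model)
            q = self.q_conv(x.transpose(1, 2)).transpose(1, 2)
            k = self.k_conv(x.transpose(1, 2)).transpose(1, 2)
            v = self.v_proj(x)
            scores = q @ k.transpose(-2, -1) / (x.size(-1) ** 0.5)
            return F.softmax(scores, dim=-1) @ v

In such a setup, each time step's graph convolution would use the learned A_t (e.g. h_t' = A_t h_t W) alongside the trend-aware temporal attention; how PT-TDGCN combines and fuses these with the pre-trained representations is described in the full paper.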
DOI: 10.3390/s25216709