Enhancing Event-Based Video Reconstruction With Bidirectional Temporal Information

Bibliographic Details
Published in: IEEE Transactions on Multimedia, Vol. 27, pp. 4831-4843
Main Authors: Gao, Pinghai, Wang, Longguang, Ao, Sheng, Zhang, Ye, Guo, Yulan
Format: Journal Article
Language: English
Published: IEEE, 2025
ISSN: 1520-9210, 1941-0077
Description
Summary: Event-based video reconstruction has emerged as an appealing research direction that breaks through the limitations of traditional cameras to better record dynamic scenes. Most existing methods reconstruct each frame from its corresponding event subset in chronological order. Since the temporal information contained in the whole event sequence is not fully exploited, these methods suffer from inferior reconstruction quality. In this paper, we propose to enhance event-based video reconstruction by leveraging the bidirectional temporal information in event sequences. The proposed model processes event sequences in a bidirectional fashion, allowing it to exploit information from the whole sequence in both temporal directions. Furthermore, a transformer-based temporal information fusion module is introduced to aggregate long-range information in both the temporal and spatial dimensions. Additionally, we propose a new dataset for the event-based video reconstruction task that contains a variety of objects and movement patterns. Extensive experiments demonstrate that the proposed model outperforms existing state-of-the-art event-based video reconstruction methods both quantitatively and qualitatively.
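The abstract outlines two architectural ideas: processing the event sequence in both temporal directions, and fusing the resulting per-step features with a transformer. Below is a minimal PyTorch sketch of that general pattern, not the paper's actual network; the class name, layer choices, feature sizes, and the globally pooled per-step features with a scalar decoder head are all illustrative assumptions made to keep the sketch compact.

```python
import torch
import torch.nn as nn

class BidirectionalEventReconstructor(nn.Module):
    """Sketch: bidirectional recurrence over event features + transformer fusion.
    All sizes and module choices are assumptions for illustration only."""

    def __init__(self, event_bins=5, feat=64, heads=4):
        super().__init__()
        # Shared encoder: event voxel grid (B, event_bins, H, W) -> feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(event_bins, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),
        )
        # One recurrent cell per temporal direction. A plain GRUCell on pooled
        # features keeps the sketch short; a real model would be conv-recurrent.
        self.fwd_cell = nn.GRUCell(feat, feat)
        self.bwd_cell = nn.GRUCell(feat, feat)
        # Transformer fusion over the time sequence of concatenated features.
        layer = nn.TransformerEncoderLayer(d_model=2 * feat, nhead=heads,
                                           batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers=2)
        # Scalar per-step output instead of a full image, for brevity.
        self.decoder = nn.Linear(2 * feat, 1)

    def forward(self, events):  # events: (B, T, event_bins, H, W)
        B, T = events.shape[:2]
        # Encode each time step, then global-average-pool to (B, feat).
        feats = [self.encoder(events[:, t]).mean(dim=(2, 3)) for t in range(T)]
        h_f = feats[0].new_zeros(B, feats[0].shape[1])
        h_b = feats[0].new_zeros(B, feats[0].shape[1])
        fwd, bwd = [], []
        for t in range(T):                 # forward pass over time
            h_f = self.fwd_cell(feats[t], h_f)
            fwd.append(h_f)
        for t in reversed(range(T)):       # backward pass over time
            h_b = self.bwd_cell(feats[t], h_b)
            bwd.append(h_b)
        bwd.reverse()                      # realign backward states with time
        seq = torch.stack([torch.cat([f, b], dim=-1)
                           for f, b in zip(fwd, bwd)], dim=1)  # (B, T, 2*feat)
        return self.decoder(self.fusion(seq))                 # (B, T, 1)

# Usage: 8 time steps of 5-bin event voxel grids at 32x32 resolution.
model = BidirectionalEventReconstructor()
out = model(torch.randn(2, 8, 5, 32, 32))
print(out.shape)  # torch.Size([2, 8, 1])
```

The point of the sketch is the data flow: each frame's representation sees both past context (forward recurrence) and future context (backward recurrence) before the transformer aggregates long-range dependencies across the whole sequence, in contrast to purely chronological, one-directional reconstruction.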
DOI: 10.1109/TMM.2025.3543010