MMTraP: Multi-Sensor Multi-Agent Trajectory Prediction in BEV

Bibliographic Details
Published in: IEEE Open Journal of Vehicular Technology, Vol. 6, pp. 1551-1567
Main Authors: Sharma, Sushil; Das, Arindam; Sistu, Ganesh; Halton, Mark; Eising, Ciaran
Format: Journal Article
Language: English
Published: IEEE, 2025
ISSN: 2644-1330
Online Access: Full Text
Description
Abstract: Accurate detection and trajectory prediction of moving vehicles are essential for motion planning in autonomous driving systems. While traffic regulations provide clear boundaries, real-world scenarios remain unpredictable due to the complex interactions between vehicles. This challenge has driven significant interest in learning-based approaches to trajectory prediction. We present MMTraP: Multi-Sensor and Multi-Agent Trajectory Prediction in BEV. The method integrates camera, LiDAR, and radar data to create detailed Bird's-Eye-View (BEV) representations of driving scenes. Our approach employs a hierarchical vector transformer architecture that first detects and classifies vehicle motion patterns and then predicts future trajectories through spatiotemporal relationship modeling, with a specific focus on vehicle interactions and environmental constraints. Despite their significance, multi-agent trajectory prediction and moving object segmentation remain underexplored in the literature, especially for real-time applications. Our method leverages multi-sensor fusion to obtain precise BEV representations and predict vehicle trajectories, achieving the highest vehicle Intersection over Union (IoU) of 63.23% and an overall mean IoU (mIoU) of 64.63%, demonstrating its effectiveness in utilizing all available sensor modalities. Additionally, we demonstrate vehicle segmentation and trajectory prediction capabilities across various lighting and weather conditions. The proposed approach has been rigorously evaluated on the nuScenes dataset. Results show that our method improves trajectory prediction accuracy and outperforms state-of-the-art techniques, particularly in challenging environments such as congested urban areas; in complex traffic scenarios, it achieves a relative improvement of 5% in trajectory prediction accuracy over baseline methods. This work advances vehicle-focused prediction systems by integrating multi-sensor BEV representation with interaction-aware transformers, and shows promise for enhancing the reliability and accuracy of trajectory predictions in autonomous driving, potentially improving safety and efficiency across diverse driving environments.
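The IoU and mIoU figures quoted in the abstract follow the standard BEV semantic segmentation metric: per-class intersection over union computed on the rasterized Bird's-Eye-View grid, then averaged across classes. The sketch below illustrates that computation; the 200x200 grid size, the two-class label set, and the synthetic masks are illustrative assumptions, not the paper's actual evaluation protocol or code.

```python
import numpy as np

def per_class_iou(pred: np.ndarray, gt: np.ndarray, num_classes: int) -> np.ndarray:
    """Compute Intersection over Union per class on BEV label grids.

    pred, gt: integer class-label maps of identical shape (e.g. H x W BEV rasters).
    Returns an array of length num_classes; classes absent from both maps stay NaN.
    """
    ious = np.full(num_classes, np.nan)
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:
            ious[c] = inter / union
    return ious

# Hypothetical 200x200 BEV grids with classes {0: background, 1: vehicle}.
rng = np.random.default_rng(0)
gt = (rng.random((200, 200)) > 0.8).astype(int)   # synthetic ground-truth vehicle mask
pred = gt.copy()
flip = rng.random((200, 200)) > 0.95              # simulate some prediction errors
pred[flip] = 1 - pred[flip]

ious = per_class_iou(pred, gt, num_classes=2)
print(f"vehicle IoU: {ious[1]:.2%}, mIoU: {np.nanmean(ious):.2%}")
```

The mIoU reported in the abstract would correspond to `np.nanmean` over the full class set evaluated in the paper, of which the vehicle class is one entry.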
DOI: 10.1109/OJVT.2025.3574385