MVUDA: Unsupervised Domain Adaptation for Multi-view Pedestrian Detection

Detailed bibliography
Title: MVUDA: Unsupervised Domain Adaptation for Multi-view Pedestrian Detection
Authors: Brorsson, Erik, 1997; Svensson, Lennart, 1976; Bengtsson, Kristofer, 1979; Åkesson, Knut, 1972
Source: Machine Vision and Applications. 37(1)
Subjects: Pseudo-labeling, Multi-view object detection, Self-training, Unsupervised domain adaptation
Description: We address multi-view pedestrian detection in a setting where labeled data is collected using a multi-camera setup different from the one used for testing. While recent multi-view pedestrian detectors perform well on the camera rig used for training, their performance declines when applied to a different setup. To facilitate seamless deployment across varied camera rigs, we propose an unsupervised domain adaptation (UDA) method that adapts the model to new rigs without requiring additional labeled data. Specifically, we leverage the mean teacher self-training framework with a novel pseudo-labeling technique tailored to multi-view pedestrian detection. This method achieves state-of-the-art performance on multiple benchmarks, including MultiviewX→Wildtrack. Unlike previous methods, our approach eliminates the need for external labeled monocular datasets, thereby reducing reliance on labeled data. Extensive evaluations demonstrate the effectiveness of our method and validate key design choices. By enabling robust adaptation across camera setups, our work enhances the practicality of multi-view pedestrian detectors and establishes a strong UDA baseline for future research.
File description: electronic
Access URL: https://research.chalmers.se/publication/549507
https://research.chalmers.se/publication/549507/file/549507_Fulltext.pdf
Database: SwePub
ISSN: 1432-1769, 0932-8092
DOI: 10.1007/s00138-025-01764-y
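
The abstract refers to the mean teacher self-training framework with pseudo-labeling. The following is a minimal, generic sketch of that framework in PyTorch, added for illustration only; the detector interface (a model mapping multi-view images to per-location occupancy probabilities), the confidence threshold, and the binary cross-entropy loss are assumptions and do not reproduce the paper's specific multi-view pseudo-labeling technique.

# Generic mean teacher self-training sketch (PyTorch). Illustrative only: the
# detector interface, confidence threshold, and BCE loss are assumptions, not
# the paper's MVUDA pseudo-labeling technique.
import torch
import torch.nn.functional as F

def ema_update(teacher, student, momentum=0.999):
    # Teacher weights track the student as an exponential moving average.
    with torch.no_grad():
        for t_p, s_p in zip(teacher.parameters(), student.parameters()):
            t_p.mul_(momentum).add_(s_p, alpha=1.0 - momentum)

def adaptation_step(student, teacher, optimizer, target_views, conf_threshold=0.7):
    # One self-training step on unlabeled images from the target camera rig.
    teacher.eval()
    with torch.no_grad():
        # Assumed interface: the detector outputs per-location occupancy
        # probabilities on the ground plane for a batch of multi-view images.
        teacher_probs = teacher(target_views)
    # Confident teacher predictions become hard pseudo-labels.
    pseudo_labels = (teacher_probs > conf_threshold).float()

    student.train()
    student_probs = student(target_views)
    loss = F.binary_cross_entropy(student_probs, pseudo_labels)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # The teacher slowly follows the student (mean teacher update).
    ema_update(teacher, student)
    return loss.item()

In this kind of setup the teacher is typically initialized as a copy of the source-trained student (e.g. via copy.deepcopy) and only the student receives gradients; per the abstract, the paper's contribution lies in how the multi-view pseudo-labels themselves are constructed.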