Video Visual Relation Detection via 3D Convolutional Neural Network


Published in: IEEE Access, Vol. 10, pp. 23748-23756
Main authors: Qu, Mingcheng; Cui, Jianxun; Su, Tonghua; Deng, Ganlin; Shao, Wenkai
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2022
ISSN: 2169-3536
Description
Summary: Video visual relation detection, which aims to detect the visual relations between objects in the form of a relation triplet <subject, predicate, object> (e.g., "person-ride-bike", "dog-toward-car", etc.), is a significant and fundamental task in computer vision. However, most existing work on visual relation detection focuses on static images. Modeling dynamic relationships in videos has drawn little attention due to the lack of large-scale video dataset support. In this work, we propose a video dataset named Video Predicate Detection and Reasoning (VidPDR) for dynamic video visual relation detection, consisting of 1,000 videos with dense, manually labeled dynamic annotations covering 21 object classes and 37 predicate classes. Moreover, we propose a novel spatio-temporal feature extraction framework with 3D Convolutional Neural Networks (ST3DCNN), which includes three modules: 1) object trajectory, 2) short-term relation prediction, and 3) greedy relational association. We conducted experiments on public datasets and our own dataset (VidPDR). The results demonstrate that our proposed method achieves a substantial improvement over state-of-the-art baselines.
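The abstract describes a spatio-temporal feature extraction framework built on 3D convolutions (ST3DCNN). The paper's implementation is not reproduced in this record; the sketch below is only a minimal, hypothetical PyTorch illustration of the kind of 3D convolutional clip encoder the abstract refers to, and all module names, layer sizes, and tensor shapes are assumptions for demonstration.

# Illustrative sketch only (not the authors' code): a minimal 3D-CNN
# spatio-temporal feature extractor for a short video clip. Layer sizes
# and the clip shape (16 RGB frames of 112x112) are assumed values.
import torch
import torch.nn as nn

class Simple3DFeatureExtractor(nn.Module):
    """Maps a clip of frames to a single spatio-temporal feature vector."""
    def __init__(self, in_channels: int = 3, feat_dim: int = 256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv3d(in_channels, 64, kernel_size=3, padding=1),  # joint space-time convolution
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),                   # downsample spatially, keep all frames
            nn.Conv3d(64, 128, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),                               # collapse frames x height x width
        )
        self.proj = nn.Linear(128, feat_dim)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, channels, frames, height, width)
        x = self.backbone(clip).flatten(1)
        return self.proj(x)

if __name__ == "__main__":
    clip = torch.randn(2, 3, 16, 112, 112)      # two 16-frame RGB clips
    feats = Simple3DFeatureExtractor()(clip)
    print(feats.shape)                          # torch.Size([2, 256])

In a pipeline like the one named in the abstract, such clip-level features would feed the downstream relation prediction and association stages; how the paper's three modules consume them is described in the full text.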
DOI: 10.1109/ACCESS.2022.3154423