Pose-guided feature region-based fusion network for occluded person re-identification

Bibliographic Details
Published in: Multimedia Systems, Vol. 29, No. 3, pp. 1771-1783
Main Authors: Xie, Gengsheng, Wen, Xianbin, Yuan, Liming, Wang, Jianchen, Guo, Changlun, Jia, Yansong, Li, Minghao
Format: Journal Article
Language: English
Published: Berlin/Heidelberg: Springer Berlin Heidelberg (Springer Nature B.V.), 01.06.2023
ISSN: 0942-4962, 1432-1882
Description
Summary: Learning discriminative features from training data while filtering out features caused by occlusions is critical in person retrieval scenarios. Most current person re-identification (Re-ID) methods based on classification or deep metric representation learning tend to overlook occlusion issues in the training set. Representations derived from obstacles are easily over-fitted and misleading because the obstacles are treated as part of the human body. To alleviate the occlusion problem, we propose a pose-guided feature region-based fusion network (PFRFN) that uses pose landmarks to guide local feature learning, with the representation learning risk evaluated separately through a loss for each body part. Compared with using only a global classification loss, jointly considering the local losses and the results of robust pose estimation enables the deep network to learn representations of the body parts that are prominently visible in the image and to remain discriminative in occluded scenes. Experimental results on multiple datasets, i.e., Market-1501, DukeMTMC, and CUHK03, demonstrate the effectiveness of our method in a variety of scenarios.
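
The loss design the summary describes (a global classification loss combined with per-part losses informed by pose estimation) can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation; the tensor shapes, the `landmark_conf` visibility gate, the 0.5 confidence threshold, and the `part_weight` factor are illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's code) of combining a global ID
# classification loss with per-part losses whose contributions are gated by
# pose-landmark confidence, so occluded parts do not mislead training.
import torch
import torch.nn.functional as F


def pose_guided_loss(global_logits, part_logits, landmark_conf, labels,
                     conf_threshold=0.5, part_weight=1.0):
    """Global classification loss plus visibility-gated part losses.

    global_logits: (B, num_ids) logits from the global feature branch.
    part_logits:   list of P tensors, each (B, num_ids), one per body part.
    landmark_conf: (B, P) pose-estimation confidence per part (assumed input).
    labels:        (B,) ground-truth identity labels.
    """
    # Global classification loss over the whole-image feature.
    loss = F.cross_entropy(global_logits, labels)

    # Per-part losses: parts with low landmark confidence (likely occluded)
    # are masked out for the samples where they are not visible.
    visible = (landmark_conf > conf_threshold).float()  # (B, P)
    for p, logits_p in enumerate(part_logits):
        per_sample = F.cross_entropy(logits_p, labels, reduction="none")  # (B,)
        mask = visible[:, p]
        # Average only over samples where this part is visible.
        part_loss = (per_sample * mask).sum() / mask.sum().clamp(min=1.0)
        loss = loss + part_weight * part_loss
    return loss


# Example shapes: batch of 4 images, 751 identities (Market-1501), 6 parts.
if __name__ == "__main__":
    B, num_ids, P = 4, 751, 6
    global_logits = torch.randn(B, num_ids)
    part_logits = [torch.randn(B, num_ids) for _ in range(P)]
    landmark_conf = torch.rand(B, P)
    labels = torch.randint(0, num_ids, (B,))
    print(pose_guided_loss(global_logits, part_logits, landmark_conf, labels))
```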
DOI: 10.1007/s00530-021-00752-2