Cross-view person identification based on confidence-weighted human pose matching

Research output: Contribution to journal › Article › peer-review

12 Scopus citations

Abstract

Cross-view person identification (CVPI) from multiple temporally synchronized videos taken by multiple wearable cameras from different, varying views is a challenging but important problem that has attracted increasing interest recently. Current state-of-the-art CVPI performance is achieved by matching appearance and motion features across videos, while the matching of pose features does not work effectively given the high inaccuracy of 3D pose estimation on videos/images collected in the wild. To address this problem, we first introduce a new confidence metric for the estimated location of each human-body joint in 3D human pose estimation. Then, a mapping function, which can be hand-crafted or learned directly from the datasets, is proposed to combine the inaccurately estimated human pose and the inferred confidence metric to accomplish CVPI. Specifically, joints with higher confidence are weighted more in the pose matching for CVPI. Finally, the estimated pose information is integrated into the appearance and motion features to boost CVPI performance. In the experiments, we evaluate the proposed method on three wearable-camera video datasets and compare its performance against several other existing CVPI methods. The experimental results show the effectiveness of the proposed confidence metric, and the integration of pose, appearance, and motion produces a new state-of-the-art CVPI performance.
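The core idea of the abstract — weighting each joint's contribution to the cross-view pose distance by its estimation confidence — can be sketched as follows. This is a minimal illustration, not the paper's actual method: the function name, the product-of-confidences weighting, and the per-joint Euclidean distance are all assumptions standing in for the hand-crafted or learned mapping function the paper describes.

```python
import numpy as np

def confidence_weighted_pose_distance(pose_a, pose_b, conf_a, conf_b):
    """Distance between two estimated 3D poses, down-weighting unreliable joints.

    pose_a, pose_b: (J, 3) arrays of estimated 3D joint locations.
    conf_a, conf_b: (J,) per-joint confidence scores in [0, 1].
    (Hypothetical interface; the paper's mapping function may differ.)
    """
    # Assumed hand-crafted mapping: a joint's weight is the product of the
    # two views' confidences, normalized so the weights sum to 1.
    w = conf_a * conf_b
    w = w / w.sum()
    # Per-joint Euclidean distance between the two pose estimates.
    per_joint = np.linalg.norm(pose_a - pose_b, axis=1)
    return float(np.sum(w * per_joint))
```

For matching, each query pose would be compared against candidate poses in the other view and assigned to the candidate with the smallest weighted distance; a joint estimated with low confidence in either view then contributes little to that decision.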

Original language: English
Article number: 8642932
Pages (from-to): 3821-3835
Number of pages: 15
Journal: IEEE Transactions on Image Processing
Volume: 28
Issue number: 8
State: Published - Aug 2019

Keywords

  • Confidence metric
  • cross-view person identification
  • human pose matching
