Reconstructing 3D Human Pose from RGB-D Data with Occlusions

  • Bowen Dang
  • Xi Zhao
  • Bowen Zhang
  • He Wang

Research output: Contribution to journal › Article › peer-review


Abstract

We propose a new method to reconstruct the 3D human body from RGB-D images with occlusions. The foremost challenge is the incompleteness of the RGB-D data due to occlusions between the body and the environment, leading to implausible reconstructions that suffer from severe human-scene penetration. To reconstruct a semantically and physically plausible human body, we propose to reduce the solution space based on scene information and prior knowledge. Our key idea is to constrain the solution space of the human body by considering the occluded body parts and visible body parts separately: modeling all plausible poses where the occluded body parts do not penetrate the scene, and constraining the visible body parts using depth data. Specifically, the first component is realized by a neural network that estimates the candidate region named the “free zone”, a region carved out of the open space within which it is safe to search for poses of the invisible body parts without concern for penetration. The second component constrains the visible body parts using the “truncated shadow volume” of the scanned body point cloud. Furthermore, we propose to use a volume matching strategy, which yields better performance than surface matching, to match the human body with the confined region. We conducted experiments on the PROX dataset, and the results demonstrate that our method produces more accurate and plausible results compared with other methods.
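The paper itself is the authoritative source for how the "truncated shadow volume" is constructed; as a rough illustration of the underlying idea, the sketch below tests whether candidate body vertices lie inside the volume behind the scanned body point cloud, truncated at a fixed depth. All names, the intrinsics matrix `K`, and the truncation distance are assumptions for this example, not the authors' implementation.

```python
import numpy as np

def in_truncated_shadow_volume(verts, body_depth, K, trunc=0.3):
    """Flag candidate vertices inside a truncated shadow volume.

    verts      : (N, 3) candidate body vertices in camera coordinates
    body_depth : (H, W) per-pixel depth of the scanned body point cloud
                 (np.nan where no body pixel was observed)
    K          : (3, 3) pinhole camera intrinsics
    trunc      : how far behind the observed surface the volume extends
    """
    H, W = body_depth.shape
    z = verts[:, 2]
    # Project each vertex into the depth image.
    uvw = (K @ verts.T).T
    u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
    v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
    inside = np.zeros(len(verts), dtype=bool)
    valid = (u >= 0) & (u < W) & (v >= 0) & (v < H) & (z > 0)
    d = body_depth[v[valid], u[valid]]
    # A vertex is inside the volume if its pixel observed the body and
    # its depth lies between the observed surface and the truncation plane.
    observed = ~np.isnan(d)
    d = np.where(observed, d, np.inf)
    inside[valid] = observed & (z[valid] >= d) & (z[valid] <= d + trunc)
    return inside
```

A constraint like this confines the visible body parts to the region consistent with the depth observation, while the occluded parts are handled separately by the learned "free zone".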

Original language: English
Article number: e14982
Journal: Computer Graphics Forum
Volume: 42
Issue number: 7
State: Published - Oct 2023

Keywords

  • CCS Concepts
  • Reconstruction
  • Computing methodologies → Shape modeling
