Poster
Contact-Aware Refinement of Human Pose Pseudo-Ground Truth via Bioimpedance Sensing
Maria-Paola Forte · Nikos Athanasiou · Giulia Ballardini · Jan Bartels · Katherine Kuchenbecker · Michael Black
Capturing accurate 3D human pose in the wild would provide valuable data for training motion-generation and pose-estimation methods. While video-based capture methods are increasingly accurate, we observe that they often fail in cases involving self-contact, such as a hand touching the face. Natural human behavior frequently includes self-contact, but determining when it occurs is challenging from video alone. In contrast, wearable bioimpedance sensing can cheaply and unobtrusively measure ground-truth skin-to-skin contact. Consequently, we propose a novel approach that combines visual pose estimators with bioimpedance sensing to capture the 3D pose of people by taking self-contact into account. Our method, BioTUCH, initializes the pose using an off-the-shelf estimator and introduces contact-aware pose optimization that minimizes reprojection error and deviations from the input estimate while enforcing vertex proximity constraints based on the measured start and end of self-touch. We validate our approach using a new dataset of synchronized RGB video, bioimpedance measurements, and 3D motion capture, demonstrating an average of 18.5% improvement in reconstruction accuracy. Our framework enables efficient large-scale collection of contact-aware training data for improving pose estimation and generation. Code and data will be shared publicly.
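The optimization described above could be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' implementation: the function name, term weights, and data layout are all assumptions, and the contact term simply penalizes squared distance between vertex pairs that bioimpedance sensing reports as touching.

```python
# Hypothetical sketch of a contact-aware objective in the spirit of BioTUCH.
# All names, weights, and shapes are illustrative assumptions.
import numpy as np

def contact_aware_loss(proj_2d, kpts_2d, pose, pose_init,
                       contact_pairs, verts_3d,
                       w_reproj=1.0, w_dev=0.1, w_contact=1.0):
    """Combine (1) 2D reprojection error, (2) deviation from the
    off-the-shelf initial pose estimate, and (3) vertex-proximity
    penalties active while self-touch is measured."""
    # Reprojection term: projected model keypoints vs. detected 2D keypoints.
    reproj = np.mean(np.sum((proj_2d - kpts_2d) ** 2, axis=-1))
    # Deviation term: stay close to the input pose estimate.
    dev = np.mean((pose - pose_init) ** 2)
    # Contact term: during the measured self-touch interval, pull the
    # assumed-in-contact mesh vertices together.
    contact = 0.0
    for i, j in contact_pairs:
        contact += np.sum((verts_3d[i] - verts_3d[j]) ** 2)
    return w_reproj * reproj + w_dev * dev + w_contact * contact
```

In a full pipeline this scalar objective would be minimized over the body-model parameters (e.g. with a gradient-based optimizer), with the contact term switched on only between the bioimpedance-measured start and end of self-touch.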