
Poster

PoseAnchor: Robust Root Position Estimation for 3D Human Pose Estimation

Jun-Hee Kim · Jumin Han · Seong-Whan Lee


Abstract:

Standard 3D human pose estimation (HPE) benchmarks employ root-centering, which normalizes poses relative to the pelvis but discards absolute root position information. While effective for evaluation, this approach limits real-world applications such as motion tracking, AR/VR, and human-computer interaction, where absolute root position is essential. Moreover, incorporating root position into these models often leads to performance degradation. To address these limitations, we introduce PoseAnchor, a unified framework that seamlessly integrates root position estimation while improving overall pose accuracy. PoseAnchor leverages Iterative Hard Thresholding Robust Least Squares Regression (ITRR), a novel robust regression approach introduced to 3D HPE for the first time. ITRR mitigates the impact of noisy 2D detections, enabling more accurate root position estimation. With ITRR, PoseAnchor enables zero-shot root localization, allowing existing models to estimate absolute root positions without retraining or architectural modifications. ITRR identifies a support set of reliable joints based on their spatial relationships, effectively filtering out unreliable joints to achieve robust root estimation. Beyond zero-shot localization, PoseAnchor incorporates ITRR into a data-driven training framework that selectively utilizes the support set to optimize pose learning. By dynamically filtering the training signal to high-confidence joints, PoseAnchor mitigates noise while improving robustness. Experiments demonstrate that PoseAnchor achieves state-of-the-art results, surpassing both root-centered and root-aware methods in fully trained settings, while also exhibiting strong zero-shot performance without retraining.
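To make the core idea concrete, below is a minimal sketch (not the authors' code) of iterative hard-thresholding robust least squares regression, the general scheme that ITRR builds on: alternately fit a least-squares model and keep only the observations with the smallest residuals as the support set. The matrix shapes, the support size, and the convergence test are illustrative assumptions.

```python
import numpy as np

def it_hard_threshold_lsq(A, b, support_size, n_iters=20):
    """Fit x minimizing ||A x - b|| while ignoring outlier rows.

    A : (n, d) design matrix (e.g., one row per detected 2D joint)
    b : (n,)   targets
    support_size : number of rows assumed to be inliers
    """
    support = np.arange(A.shape[0])          # start with all joints in the support set
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        # 1) least-squares fit on the current support set
        x, *_ = np.linalg.lstsq(A[support], b[support], rcond=None)
        # 2) residuals over all rows, including previously excluded ones
        residuals = np.abs(A @ x - b)
        # 3) hard threshold: keep the rows with the smallest residuals
        new_support = np.argsort(residuals)[:support_size]
        if np.array_equal(np.sort(new_support), np.sort(support)):
            break                             # support set has stabilized
        support = new_support
    return x, support

# Toy usage: recover a linear model from data containing a few gross outliers.
rng = np.random.default_rng(0)
A = np.column_stack([rng.uniform(-1, 1, 30), np.ones(30)])
b = A @ np.array([2.0, 0.5]) + 0.01 * rng.normal(size=30)
b[:5] += 5.0                                  # corrupt 5 observations ("unreliable joints")
x_hat, inliers = it_hard_threshold_lsq(A, b, support_size=25)
```

In PoseAnchor, the analogous support set is computed over joints, so gross 2D detection errors are excluded from the regression that anchors the absolute root position; the sketch above only illustrates the alternating fit-and-threshold loop on generic data.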
