Poster
Fine-grained Spatiotemporal Grounding on Egocentric Videos
Shuo Liang · Yiwu Zhong · Zi-Yuan Hu · Yeyao Tao · Liwei Wang
Spatiotemporal video grounding aims to localize target entities in videos based on textual queries, yet existing studies predominantly focus on exocentric videos. In comparison, egocentric video grounding remains underexplored despite its broad applications in augmented reality and robotics. In this work, we conduct a systematic analysis of the discrepancies between egocentric and exocentric videos, revealing key challenges such as shorter object durations, sparser trajectories, smaller object sizes, and larger positional shifts. Further, we introduce EgoMask, the first pixel-level benchmark for fine-grained spatiotemporal grounding in egocentric videos. It is constructed via our proposed automatic annotation pipeline, which annotates referring expressions and object masks across short-, mid-, and long-term videos. Additionally, we create EgoMask-Train, a large-scale training dataset to facilitate model development. Experiments demonstrate that state-of-the-art spatiotemporal grounding models perform poorly on our benchmark EgoMask, but fine-tuning on EgoMask-Train yields significant improvements while preserving performance on exocentric datasets. Our work thus provides essential resources and insights for advancing egocentric video understanding.
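The four discrepancies named above (object duration, trajectory sparsity, object size, positional shift) can all be measured directly from per-frame mask annotations. The sketch below is a minimal illustration of how such statistics might be computed, assuming each object's annotation is a mapping from frame index to a binary mask array; the data format and function name are assumptions for illustration, not the paper's actual analysis code.

```python
# Illustrative sketch (not the paper's code): computing the per-object statistics
# discussed above from per-frame binary masks. Assumes each object is given as a
# dict {frame_index: HxW boolean numpy array}; all names here are hypothetical.
import numpy as np

def object_statistics(masks_by_frame, num_frames):
    """Return duration ratio, trajectory sparsity, mean size, and mean positional shift."""
    frames = sorted(f for f, m in masks_by_frame.items() if m.any())
    if not frames:
        return None

    # Duration: fraction of the video in which the object is visible.
    duration_ratio = len(frames) / num_frames

    # Size: mask area normalized by frame area; centroid in normalized coordinates.
    sizes, centers = [], []
    for f in frames:
        m = masks_by_frame[f]
        sizes.append(m.sum() / m.size)
        ys, xs = np.nonzero(m)
        centers.append((xs.mean() / m.shape[1], ys.mean() / m.shape[0]))

    # Positional shift: mean displacement of the normalized mask centroid
    # between consecutive visible frames.
    shifts = [np.hypot(cx2 - cx1, cy2 - cy1)
              for (cx1, cy1), (cx2, cy2) in zip(centers, centers[1:])]

    # Trajectory sparsity: fraction of the visible span with missing masks.
    span = frames[-1] - frames[0] + 1
    sparsity = 1.0 - len(frames) / span

    return {
        "duration_ratio": duration_ratio,
        "trajectory_sparsity": sparsity,
        "mean_size": float(np.mean(sizes)),
        "mean_positional_shift": float(np.mean(shifts)) if shifts else 0.0,
    }
```

Under these (assumed) definitions, the analysis in the abstract would correspond to egocentric objects exhibiting lower duration ratios, higher sparsity, smaller mean sizes, and larger centroid shifts than their exocentric counterparts.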