

Poster

Recognizing Actions from Robotic View for Natural Human-Robot Interaction

Ziyi Wang · Peiming Li · Hong Liu · Zhichao Deng · Can Wang · Jun Liu · Junsong Yuan · Mengyuan Liu


Abstract:

Natural Human-Robot Interaction (N-HRI) requires a robot to recognize human actions at varying distances while accounting for disturbing motions from either the human or the robot. However, existing human action datasets are primarily designed for conventional Human-Robot Interaction (HRI) and fail to meet the unique requirements of N-HRI due to limited data, data modalities, task categories, and diversity in subjects and environments. To address this, we introduce ACTIVE, a large-scale human action dataset focused on ACtions from RoboTIc ViEw. Our dataset includes 30 action categories, 80 participants, and 46,868 video instances, encompassing both point cloud and RGB modalities. During data capture, participants perform a range of human actions in diverse environments at varying distances (from 3 m to 50 m), while also executing disturbing motions and while the robot itself is in different states of motion. To recognize actions from a robotic view, we propose ACTIVE-PC, a point cloud-based method for the ACTIVE dataset that recognizes human actions at long distances using our proposed Multilevel Neighborhood Sampling, Layered Recognizers, and Elastic Ellipse Query, together with precise decoupling of kinematic interference from human actions. Experimental results verify the effectiveness of our method. Our project page is https://active2750.github.io/.
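The abstract names an Elastic Ellipse Query but gives no implementation details. The sketch below is only a rough illustration, under my own assumptions, of what an anisotropic (ellipsoidal) neighborhood query on a point cloud generally looks like: stretching the grouping region along the depth axis where robotic-view point clouds become sparse. The function name `ellipse_query` and the parameters `radii` and `max_k` are hypothetical and are not taken from the paper.

```python
# Hypothetical sketch of an ellipsoidal neighborhood query on a point cloud.
# NOTE: this is NOT the authors' Elastic Ellipse Query; it only illustrates the
# general idea of replacing an isotropic ball query with an anisotropic one.
import numpy as np

def ellipse_query(points, centers, radii=(0.3, 0.3, 1.0), max_k=32):
    """Group neighbors inside an axis-aligned ellipsoid around each center.

    points:  (N, 3) array of xyz coordinates.
    centers: (M, 3) array of query centers.
    radii:   semi-axes (rx, ry, rz); a larger rz stretches the neighborhood
             along the depth axis, where long-range point clouds are sparse.
    max_k:   maximum number of neighbors kept per center.
    Returns a list of index arrays, one per center.
    """
    radii = np.asarray(radii, dtype=np.float64)
    # Dividing by the semi-axes maps the ellipsoid to a unit ball,
    # so the membership test reduces to a radius check in scaled space.
    scaled_pts = points / radii
    scaled_ctr = centers / radii
    groups = []
    for c in scaled_ctr:
        d2 = np.sum((scaled_pts - c) ** 2, axis=1)
        idx = np.where(d2 <= 1.0)[0][:max_k]
        groups.append(idx)
    return groups

# Toy usage: 1,000 random points in a 2 m x 2 m x 10 m volume, 4 query centers.
pts = np.random.rand(1000, 3) * np.array([2.0, 2.0, 10.0])
ctrs = pts[np.random.choice(len(pts), 4, replace=False)]
neighborhoods = ellipse_query(pts, ctrs)
print([len(g) for g in neighborhoods])
```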
