

Poster

iManip: Skill-Incremental Learning for Robotic Manipulation

Zexin Zheng · Jia-Feng Cai · Xiao-Ming Wu · Yilin Wei · Yu-Ming Tang · Wei-Shi Zheng · Ancong Wu


Abstract:

The development of a generalist agent with multiple adaptive manipulation skills has been a long-standing goal in the robotics community. In this paper, we explore a crucial task, skill-incremental learning, in robotic manipulation, which is to endow robots with the ability to learn new manipulation skills based on previously learned knowledge without re-training. First, we build a skill-incremental environment based on the RLBench benchmark and explore how traditional incremental methods perform in this setting. We find that they suffer from severe catastrophic forgetting because these methods, designed for classification, overlook the temporality and action complexity characteristic of robotic manipulation tasks. To this end, we propose an incremental Manipulation framework, termed iManip, to mitigate these issues. We first design a temporal replay strategy to maintain the integrity of old skills when learning a new skill. Moreover, we propose an extendable PerceiverIO, consisting of an action prompt with extendable weights to adapt to the new action primitives of a new skill. Extensive experiments show that our framework performs well in skill-incremental learning. Code for the skill-incremental environment and our framework will be open-sourced.
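The abstract names two mechanisms without implementation detail. Below is a minimal sketch, not the authors' released code, of how the two ideas could look in PyTorch: a replay buffer that stores whole trajectory segments of old skills (so replayed data keeps its temporal order) and an action-prompt module whose learnable weights are extended, with old prompts frozen, whenever a new skill is added. All names here (TemporalReplayBuffer, ExtendableActionPrompt, segments_per_skill, prompt_dim) are hypothetical assumptions, not identifiers from the paper.

```python
import random
from typing import Dict, List

import torch
import torch.nn as nn


class TemporalReplayBuffer:
    """Keeps complete trajectory segments per old skill so replayed samples
    preserve temporal order instead of mixing isolated frames."""

    def __init__(self, segments_per_skill: int = 20):
        self.segments_per_skill = segments_per_skill
        self._store: Dict[str, List[List[dict]]] = {}

    def add_segment(self, skill: str, segment: List[dict]) -> None:
        bucket = self._store.setdefault(skill, [])
        if len(bucket) < self.segments_per_skill:
            bucket.append(segment)                            # keep the whole ordered segment
        else:
            bucket[random.randrange(len(bucket))] = segment   # reservoir-style overwrite

    def sample_segment(self, skill: str) -> List[dict]:
        return random.choice(self._store[skill])


class ExtendableActionPrompt(nn.Module):
    """Learnable prompt tokens per skill; prompts of previous skills are frozen
    when a new skill's prompt is appended, giving new action primitives fresh capacity."""

    def __init__(self, prompt_len: int = 4, prompt_dim: int = 256):
        super().__init__()
        self.prompt_len, self.prompt_dim = prompt_len, prompt_dim
        self.prompts = nn.ParameterList()

    def add_skill(self) -> int:
        for p in self.prompts:                                # freeze old skills' prompts
            p.requires_grad_(False)
        self.prompts.append(
            nn.Parameter(torch.randn(self.prompt_len, self.prompt_dim) * 0.02)
        )
        return len(self.prompts) - 1                          # index of the new skill's prompt

    def forward(self, tokens: torch.Tensor, skill_id: int) -> torch.Tensor:
        # Prepend the selected skill's prompt to the input token sequence (B, T, D).
        prompt = self.prompts[skill_id].unsqueeze(0).expand(tokens.size(0), -1, -1)
        return torch.cat([prompt, tokens], dim=1)
```

In this sketch, the prompt tokens would be concatenated with the observation tokens before the PerceiverIO-style backbone, and the frozen old prompts together with replayed segments would be what limits forgetting; how iManip actually wires these pieces into its architecture is specified in the paper, not here.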
