

Poster

CoA-VLA: Improving Vision-Language-Action Models via Visual-Text Chain-of-Affordance

Jinming Li · Yichen Zhu · Zhibin Tang · Junjie Wen · Minjie Zhu · Xiaoyu Liu · Chengmeng Li · Ran Cheng · Yaxin Peng · Yan Peng · Feifei Feng


Abstract:

Robot foundation models, particularly Vision-Language-Action (VLA) models, have garnered significant attention for their ability to enhance robot policy learning, greatly improving robots' generalization and robustness. OpenAI's recent model, O1, showcased impressive capabilities in solving complex problems through extensive reasoning chains. This prompts an important question: can robot models achieve better performance in multi-task, complex environments by reviewing prior observations and then providing task-specific reasoning to guide action prediction?

In this paper, we introduce Chain-of-Affordance (CoA-VLA), a novel approach to scaling robot models by incorporating reasoning in the form of sequential robot affordances to facilitate task completion. Specifically, we prompt the model to consider four types of affordances before taking action: (1) object affordance: what object to manipulate and where it is; (2) grasp affordance: the specific object part to grasp; (3) spatial affordance: the optimal space to place the object; and (4) movement affordance: the collision-free path for movement. We further transform each affordance into two prompting formats: visual affordance and textual affordance. We introduce a novel vision-language co-injection module that integrates this knowledge into the policy network, allowing the robot to leverage essential contextual information during action inference, which improves precision and robustness. Our experiments demonstrate that CoA-VLA outperforms state-of-the-art robot foundation models, including OpenVLA and Octo, on a variety of tasks. Furthermore, CoA-VLA exhibits strong generalization capabilities, including recognizing unseen object poses, identifying free space, and avoiding obstacles in novel environments.
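For concreteness, the sketch below shows one way the sequential affordance reasoning described in the abstract could be rendered as a textual prompt. The ChainOfAffordance dataclass, its field formats, and the compose_affordance_prompt helper are hypothetical illustrations assumed for this sketch, not the paper's actual interface or data format.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical container for the four affordance types named in the abstract.
# Field names and value formats are illustrative assumptions, not CoA-VLA's API.
@dataclass
class ChainOfAffordance:
    object_affordance: Tuple[str, Tuple[float, float]]   # (object name, normalized image location)
    grasp_affordance: str                                 # object part to grasp
    spatial_affordance: Tuple[float, float]               # free-space placement target
    movement_affordance: List[Tuple[float, float]]        # waypoints of a collision-free path


def compose_affordance_prompt(task: str, coa: ChainOfAffordance) -> str:
    """Render the affordance chain as textual reasoning attached to the task instruction."""
    obj, obj_loc = coa.object_affordance
    path = " -> ".join(f"({x:.2f}, {y:.2f})" for x, y in coa.movement_affordance)
    return (
        f"Task: {task}\n"
        f"Object affordance: manipulate the {obj} located at {obj_loc}.\n"
        f"Grasp affordance: grasp its {coa.grasp_affordance}.\n"
        f"Spatial affordance: place it at {coa.spatial_affordance}.\n"
        f"Movement affordance: follow the collision-free path {path}."
    )


if __name__ == "__main__":
    coa = ChainOfAffordance(
        object_affordance=("mug", (0.42, 0.67)),
        grasp_affordance="handle",
        spatial_affordance=(0.75, 0.30),
        movement_affordance=[(0.42, 0.67), (0.60, 0.50), (0.75, 0.30)],
    )
    print(compose_affordance_prompt("put the mug on the shelf", coa))
```

In the paper's framing, such textual affordances are paired with visual affordances and injected into the policy network via the vision-language co-injection module; the sketch only covers the textual side.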
