Poster
RoboMM: All-in-One Multimodal Large Model for Robotic Manipulation
Feng Yan · Fanfan Liu · Yiyang Huang · Zechao Guan · Liming Zheng · Yufeng Zhong · Chengjian Feng · Lin Ma
In recent years, robotics has advanced significantly through the integration of larger models and large-scale datasets. However, challenges remain in applying these models to 3D spatial interactions and in managing data collection costs. To address these issues, we propose the multimodal robotic manipulation model \textit{RoboMM}, along with the comprehensive dataset \textit{RoboData}. \textit{RoboMM} enhances 3D perception through camera parameters and occupancy supervision. Building on OpenFlamingo, it incorporates a Modality-Isolation-Mask and multimodal decoder blocks, improving modality fusion and fine-grained perception. \textit{RoboData} offers a complete evaluation system by integrating several well-known datasets, achieving the first fusion of multi-view images, camera parameters, depth maps, and actions; its spatial alignment enables comprehensive learning from diverse robotic datasets. Equipped with \textit{RoboData} and the unified physical space, \textit{RoboMM} is the first generalist policy to surpass expert models, allowing simultaneous evaluation of all tasks across multiple datasets rather than being limited to specific data or task selections. Its design significantly improves robotic manipulation performance, increasing the average sequence length on the CALVIN benchmark from 1.7 to 3.5 and ensuring cross-embodiment capability, achieving state-of-the-art results across multiple datasets, including both simulated and real-world data.
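The abstract does not spell out how the Modality-Isolation-Mask is realized; one common reading is an attention mask that controls which tokens may attend to tokens of other modalities. The sketch below is a hypothetical, minimal PyTorch illustration, not the paper's implementation: the function name, the per-token modality ids, and the strict same-modality rule are all assumptions.

    import torch

    def modality_isolation_mask(modality_ids: torch.Tensor) -> torch.Tensor:
        # Boolean attention mask where True means "attention allowed".
        # Here each token may only attend to tokens of the same modality;
        # RoboMM's actual mask may follow a different (e.g. partially cross-modal) rule.
        return modality_ids.unsqueeze(0) == modality_ids.unsqueeze(1)

    # Toy sequence: 2 language tokens (id 0), 3 image tokens (id 1), 1 action token (id 2).
    ids = torch.tensor([0, 0, 1, 1, 1, 2])
    mask = modality_isolation_mask(ids)  # shape (6, 6), block-diagonal
    # The mask can be passed as attn_mask to
    # torch.nn.functional.scaled_dot_product_attention (True = keep, False = mask out).

Under this reading, such a mask lets a single transformer process interleaved multimodal sequences while limiting unwanted interference between modalities.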