

Poster

Dita: Scaling Diffusion Transformer for Generalist Vision-Language-Action Policy

Zhi Hou · Tianyi Zhang · Yuwen Xiong · Haonan Duan · Hengjun Pu · Ronglei Tong · Chengyang Zhao · Xizhou Zhu · Yu Qiao · Jifeng Dai · Yuntao Chen


Abstract:

While recent vision-language-action models trained on diverse robot datasets exhibit promising generalization capabilities with limited in-domain data, their reliance on compact action heads to predict discretized or continuous actions constrains adaptability to heterogeneous action spaces. We present Dita, a scalable framework that leverages Transformer architectures to directly denoise continuous action sequences through a unified multimodal diffusion process. Departing from prior methods that condition denoising on fused embeddings via shallow networks, Dita employs in-context conditioning, enabling fine-grained alignment between denoised actions and raw visual tokens from historical observations. This design explicitly models action deltas and environmental nuances. By capitalizing on the Transformer's scalability, Dita effectively unifies cross-embodiment datasets spanning varying camera perspectives, tasks, and action spaces. Evaluations across extensive benchmarks demonstrate state-of-the-art or comparable performance in simulation. Notably, Dita achieves robust real-world adaptation to environmental variances and complex long-horizon tasks through 10-shot finetuning, using only third-person camera inputs. The architecture establishes a versatile, lightweight, and open-source baseline for generalist robot policy learning. The code and website are included in the supplementary materials.
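To illustrate the in-context conditioning idea contrasted in the abstract (denoised action tokens attend to raw visual and language tokens inside one Transformer sequence, rather than conditioning a shallow head on a fused embedding), here is a minimal PyTorch sketch. It assumes a DDPM-style noise-prediction objective, and all class names, dimensions, and token counts are illustrative placeholders, not taken from the Dita code release.

```python
# Minimal sketch of in-context conditioning for a diffusion action policy.
# Hypothetical names and shapes; not the authors' implementation.
import torch
import torch.nn as nn


class InContextDiffusionPolicy(nn.Module):
    def __init__(self, dim=256, action_dim=7, horizon=8, n_layers=6, n_heads=8):
        super().__init__()
        self.action_in = nn.Linear(action_dim, dim)    # embed noisy action chunk
        self.action_out = nn.Linear(dim, action_dim)   # predict per-step noise
        self.time_embed = nn.Embedding(1000, dim)      # diffusion timestep embedding
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=n_heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.horizon = horizon

    def forward(self, obs_tokens, lang_tokens, noisy_actions, t):
        # obs_tokens:    (B, N_obs, dim)  raw visual tokens from historical observations
        # lang_tokens:   (B, N_lang, dim) encoded instruction tokens
        # noisy_actions: (B, horizon, action_dim) actions corrupted at diffusion step t
        a = self.action_in(noisy_actions) + self.time_embed(t)[:, None, :]
        # In-context conditioning: concatenate all modalities into one token
        # sequence so action tokens attend directly to raw observation tokens,
        # instead of conditioning on a single fused embedding.
        tokens = torch.cat([obs_tokens, lang_tokens, a], dim=1)
        out = self.transformer(tokens)
        # Read the noise prediction off the trailing action-token positions.
        return self.action_out(out[:, -self.horizon:, :])


if __name__ == "__main__":
    B, dim = 2, 256
    policy = InContextDiffusionPolicy(dim=dim)
    obs = torch.randn(B, 64, dim)       # e.g. patch tokens from two past frames
    lang = torch.randn(B, 16, dim)      # encoded language instruction
    actions = torch.randn(B, 8, 7)      # noisy 8-step action chunk
    t = torch.randint(0, 1000, (B,))    # diffusion timesteps
    eps_pred = policy(obs, lang, actions, t)
    print(eps_pred.shape)               # torch.Size([2, 8, 7])
```

Because the conditioning is just token concatenation, the same backbone can absorb cross-embodiment data with differing camera views and action spaces by varying the number and type of tokens, which is the scalability property the abstract emphasizes.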
