Poster

TurboTrain: Towards Efficient and Balanced Multi-Task Learning for Multi-Agent Perception and Prediction

Zewei Zhou · Zhihao Zhao · Tianhui Cai · Zhiyu Huang · Bolei Zhou · Jiaqi Ma


Abstract:

End-to-end training of multi-agent systems offers significant advantages in improving multi-task performance. However, training such models remains challenging and requires extensive manual design and monitoring. In this work, we introduce TurboTrain, a novel and efficient training framework for multi-agent perception and prediction. TurboTrain comprises two key components: a multi-agent spatiotemporal pretraining scheme based on masked reconstruction learning and a balanced multi-task learning strategy based on gradient conflict suppression. By streamlining the training process, our framework eliminates the need for manually designing and tuning complex multi-stage training pipelines, substantially reducing training time and improving performance. We evaluate TurboTrain on a real-world cooperative driving dataset and demonstrate that it further improves the performance of state-of-the-art multi-agent perception and prediction models by nearly 9%. Our results highlight that pretraining effectively captures spatiotemporal multi-agent features and significantly benefits downstream tasks. Moreover, the proposed balanced multi-task learning strategy enhances cooperative detection and prediction. The codebase will be released to facilitate future multi-agent multi-task research.
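The abstract does not spell out the gradient-conflict suppression rule. A common instance of this idea is PCGrad-style gradient surgery: when two task gradients point in opposing directions (negative inner product), one is projected onto the normal plane of the other so it no longer undoes the other task's progress. The sketch below illustrates that mechanism under this assumption; the function name `suppress_conflict` and the two-task setup (detection vs. prediction) are illustrative, not the paper's actual implementation.

```python
import numpy as np

def suppress_conflict(g_i, g_j):
    """If g_i conflicts with g_j (negative inner product), project g_i
    onto the normal plane of g_j, removing the opposing component.
    This is the PCGrad-style surgery step, shown here as one plausible
    form of gradient-conflict suppression."""
    dot = np.dot(g_i, g_j)
    if dot < 0:
        g_i = g_i - (dot / np.dot(g_j, g_j)) * g_j
    return g_i

# Illustrative gradients for two tasks (e.g., cooperative detection
# and trajectory prediction) that pull a shared parameter in
# conflicting directions.
g_det = np.array([1.0, 0.0])
g_pred = np.array([-1.0, 1.0])

g_det_adjusted = suppress_conflict(g_det, g_pred)
# The adjusted detection gradient is now orthogonal to g_pred,
# so applying it no longer directly harms the prediction task.
```

In a full multi-task loop, each task's gradient would be computed separately per step, pairwise-adjusted this way, and the results summed before the optimizer update.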
