Workshop

Workshop on Safe and Trustworthy Multimodal AI Systems

Carlos Hinojosa, Yinpeng Dong, Adel Bibi, Jindong Gu, Yichi Zhang, Wenxuan Zhang, Lama Alssum, Andres Villa, Juan Carlos L. Alcazar, Chen Zhao, Lingjuan Lyu, Mohamed Elhoseiny, Bernard Ghanem, Philip Torr

Sun 19 Oct, noon PDT

Multimodal systems are transforming AI by enabling models to understand and act across language, vision, and other modalities, driving advances in robotics, autonomous driving, and scientific discovery. However, these capabilities raise serious safety and trustworthiness concerns, as traditional safeguards often fall short in multimodal contexts. The Workshop on Safe and Trustworthy Multimodal AI Systems (SaFeMM-AI) at ICCV 2025 brings together the computer vision community to address challenges such as hallucinations, privacy leakage, and jailbreak vulnerabilities. Its goal is to promote the development of safer, more robust, and more reliable multimodal models that can handle unsafe or adversarial inputs and consistently produce trustworthy outputs.