Workshop
1st Workshop on Multimodal Sign Language Recognition
Hamzah Luqman, Raffaele Mineo, Maad Alowaifeer, Simone Palazzo, Motaz Alfarraj, Mufti Mahmud, Amelia Sorrenti, Federica Proietto Salanitri, Giovanni Bellitto, Concetto Spampinato, Silvio Giancola, Muhammad Haris Khan, Moi Hoon Yap, Ahmed Abul Hasanaath, Murtadha Aljubran, Sarah Alyami, Egidio Ragonese, Gaia Caligiore, Sabina Fontana, Senya Polikovsky, Sevgi Gurbuz, Kamrul Islam
Mon 20 Oct, 11:30 a.m. PDT
Sign language is a rich and expressive visual language that uses hand gestures, body movements, and facial expressions to convey meaning. As hearing impairment becomes increasingly prevalent worldwide, sign language recognition research is advancing to enable more inclusive communication technologies. The 1st Multimodal Sign Language Recognition Workshop (MSLR 2025) brings together researchers to explore vision-based, sensor-based, and generative approaches. Emphasizing multimodal fusion of RGB video, depth maps, skeletal and facial keypoints, and radar data, the workshop highlights systems designed for real-world variability and privacy. Topics include statistical and neural sign-to-text and text-to-sign translation, cross-lingual and multilingual methods, multimodal generative synthesis, and inclusive dataset creation. Through keynotes, presentations, and challenges on continuous and isolated sign recognition, participants will engage with new benchmarks, metrics, and ethical data practices. The workshop also addresses privacy-preserving sensing and healthcare accessibility, inviting contributions from researchers across disciplines to shape the future of multimodal sign language technologies.