International Workshop on Observing and Understanding Hands in Action
Hyung Jin Chang, Rongyu Chen, Zicong Fan, Rao Fu, Kun He, Kailin Li, Take Ohkawa, Yoichi Sato, Linlin Yang, Lixin Yang, Angela Yao, Qi Ye, Linguang Zhang, Zhongqun Zhang
Abstract
The ninth edition of this workshop will emphasize the use of multimodal LLMs for hand-related tasks. Multimodal LLMs have reshaped AI perception, making groundbreaking contributions to multimodal understanding, zero-shot learning, and transfer learning. These models can process and integrate different types of hand data (modalities), capturing richer, more diverse representations and thereby better understanding complex hand-object and hand-hand interaction scenarios.