Poster
One Last Attention for Your Vision-Language Model
Liang Chen · Ghazi Shazan Ahmad · Tianjun Yao · Lingqiao Liu · Zhiqiang Shen
Exhibit Hall I #129
Tue 21 Oct 2:45 p.m. — 4:45 p.m. PDT
Abstract:
Pretrained vision-language models (VLMs), such as CLIP, achieve remarkable zero-shot performance, yet their downstream potential hinges on effective fine-tuning. Most adaptation methods focus on refining representations from the separate modalities (text or vision) but neglect the critical role of their fused representation in the decision-making process, i.e., the rational matrix that drives the final prediction. To bridge this gap, we propose a simple yet effective Rational Adaptation (RAda) that explicitly exploits the final fused representation during fine-tuning. RAda employs a learned mask, obtained from a lightweight attention layer attached at the end of a VLM, to dynamically calibrate the contribution of each element in the rational matrix, enabling targeted adjustments to the final cross-modal interactions without costly modifications to intermediate features. Experiments in different settings (i.e., updating or freezing the pretrained encoders during adaptation, and test-time training that only has access to unlabeled test data) show that RAda serves as a versatile fine-tuning technique, improving the baseline with minimal code and performing comparably with current state-of-the-art methods in most settings.
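The sketch below is not the authors' implementation, but a minimal PyTorch illustration of the mechanism the abstract describes, under one plausible reading: the "rational matrix" is taken to be the element-wise product of the normalized image and text embeddings (whose sum over the feature dimension gives the usual CLIP logits), and a hypothetical `RationalAdapter` with a single-head attention layer predicts a sigmoid mask that re-weights those elements. The module name, the attention over the class rows, and the sigmoid gating are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class RationalAdapter(nn.Module):
    """Hypothetical lightweight attention head that predicts a calibration mask
    over the rational matrix (element-wise image-text feature products)."""

    def __init__(self, dim: int):
        super().__init__()
        # Single-head attention over the class rows (an assumption, not the paper's exact design).
        self.attn = nn.MultiheadAttention(dim, num_heads=1, batch_first=True)

    def forward(self, img_feat: torch.Tensor, txt_feat: torch.Tensor) -> torch.Tensor:
        # img_feat: (B, D) image embeddings; txt_feat: (C, D) class-prompt text embeddings.
        img_feat = F.normalize(img_feat, dim=-1)
        txt_feat = F.normalize(txt_feat, dim=-1)

        # Rational matrix: per-dimension contributions to each image-class logit, shape (B, C, D).
        # Summing it over D recovers the standard CLIP cosine-similarity logits.
        rational = img_feat.unsqueeze(1) * txt_feat.unsqueeze(0)

        # Lightweight attention over the C class rows, squashed to a mask in (0, 1).
        mask, _ = self.attn(rational, rational, rational)
        mask = torch.sigmoid(mask)

        # Masked rational matrix summed over the feature dimension gives calibrated logits (B, C).
        return (mask * rational).sum(dim=-1)


# Example with dummy CLIP-like embeddings: batch of 8 images, 10 classes, feature dim 512.
if __name__ == "__main__":
    adapter = RationalAdapter(dim=512)
    img = torch.randn(8, 512)
    txt = torch.randn(10, 512)
    logits = adapter(img, txt)  # (8, 10), usable with a standard cross-entropy loss
    print(logits.shape)
```

In a fine-tuning run along the lines described above, the pretrained CLIP encoders could be kept frozen (or updated, depending on the setting) while a small head like this is trained on the masked logits, leaving intermediate features untouched.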