

Poster

FrameFusion: Combining Similarity and Importance for Video Token Reduction on Large Visual Language Models

Tianyu Fu · Tengxuan Liu · Qinghao Han · Guohao Dai · Shengen Yan · Huazhong Yang · Xuefei Ning · Yu Wang


Abstract: The increasing demand to process long and high-resolution videos significantly burdens Large Vision-Language Models (LVLMs) due to the enormous number of visual tokens. Existing token reduction methods primarily prune tokens based on importance metrics, such as cumulative attention scores. However, even important tokens may exhibit high redundancy caused by similarity among adjacent video frames and repetitive visual elements. To address this limitation, we propose FrameFusion, a novel token reduction approach that integrates similarity-based merging with importance-based pruning. We conduct a thorough study of token similarity characteristics, revealing three key insights: (1) spatially corresponding vision tokens between adjacent frames have higher cosine similarities than other token pairs; (2) high token similarities prominently decrease in deeper model layers; and (3) token similarity rankings are highly consistent across different layers. Guided by these observations, FrameFusion computes token similarities exclusively between corresponding vision tokens from adjacent frames, applies token merging in the initial successive layers followed by pruning in deeper layers, and adopts a cascaded merging strategy to further enhance efficiency. We evaluate FrameFusion comprehensively across six diverse LVLMs, ranging from 2B to 72B parameters, on five video benchmarks covering video retrieval, question answering, and spatial-temporal understanding tasks. Experiments show that FrameFusion reduces vision tokens by 70%, achieving 1.6–3.6× end-to-end speedups with an average performance impact of less than 3%.
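As a rough sketch of the merging rule described in the abstract, the PyTorch snippet below computes cosine similarities only between spatially corresponding tokens of adjacent frames and merges redundant tokens into the last kept token at the same position. It assumes vision tokens are laid out frame by frame; the function names and the 0.9 threshold are illustrative, and the paper's cascaded, layer-wise merging followed by importance-based pruning is more elaborate than this single pass.

```python
import torch
import torch.nn.functional as F


def adjacent_frame_similarity(tokens: torch.Tensor, tokens_per_frame: int) -> torch.Tensor:
    """Cosine similarity between spatially corresponding tokens of adjacent frames.

    tokens: (num_frames * tokens_per_frame, hidden_dim) vision-token states.
    Returns a (num_frames - 1, tokens_per_frame) tensor where entry [f, p]
    compares token p of frame f + 1 with token p of frame f.
    """
    x = tokens.view(-1, tokens_per_frame, tokens.shape[-1])  # (F, P, D)
    x = F.normalize(x, dim=-1)
    return (x[1:] * x[:-1]).sum(dim=-1)  # (F - 1, P)


def merge_similar_tokens(tokens: torch.Tensor, tokens_per_frame: int,
                         threshold: float = 0.9) -> torch.Tensor:
    """Merge each token into the last kept token at the same spatial position
    when its adjacent-frame similarity exceeds `threshold`, then drop the
    merged duplicates. Chained merges use a running average as a simplification."""
    sim = adjacent_frame_similarity(tokens, tokens_per_frame)  # (F - 1, P)
    x = tokens.view(-1, tokens_per_frame, tokens.shape[-1]).clone()
    num_frames, P = x.shape[0], tokens_per_frame
    keep = torch.ones(num_frames, P, dtype=torch.bool)
    last = torch.zeros(P, dtype=torch.long)  # frame index of last kept token per position
    pos = torch.arange(P)
    for f in range(1, num_frames):
        m = sim[f - 1] > threshold           # positions redundant w.r.t. the previous frame
        x[last[m], pos[m]] = 0.5 * (x[last[m], pos[m]] + x[f, m])
        keep[f, m] = False                   # prune the merged duplicate
        last[~m] = f                         # surviving positions become the new anchor
    return x[keep]                           # (num_kept_tokens, hidden_dim)
```

On real video features, where adjacent frames are highly similar, this pass removes a large fraction of tokens before any importance-based pruning; the paper additionally restricts merging to the initial layers, where its study finds high similarities concentrate.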
