

Poster

General Compression Framework for Efficient Transformer Object Tracking

Lingyi Hong · Jinglun Li · Xinyu Zhou · Shilin Yan · Pinxue Guo · Kaixun Jiang · Zhaoyu Chen · Shuyong Gao · Runze Li · Xingdong Sheng · Wei Zhang · Hong Lu · Wenqiang Zhang


Abstract: Previous works have attempted to improve tracking efficiency through lightweight architecture design or knowledge distillation from teacher models to compact student trackers. However, these solutions often sacrifice accuracy for speed to a great extent, and they also suffer from complex training processes and structural limitations. We therefore propose a general model compression framework for efficient transformer object tracking, named CompressTracker, to reduce model size while preserving tracking accuracy. Our approach features a novel stage division strategy that segments the transformer layers of the teacher model into distinct stages, breaking the limitation of model structure. Additionally, we design a replacement training technique that randomly substitutes specific stages in the student model with those from the teacher model, rather than training the student model in isolation. Replacement training enhances the student model's ability to replicate the teacher model's behavior and simplifies the training process. To further force the student model to emulate the teacher model, we incorporate prediction guidance and stage-wise feature mimicking to provide additional supervision during the compression process. CompressTracker is structurally agnostic, making it compatible with any transformer architecture. We conduct a series of experiments to verify the effectiveness and generalizability of CompressTracker. Our CompressTracker-SUTrack, compressed from SUTrack, retains about 99% of the performance on LaSOT ($\mathbf{72.2\%}$ AUC) while achieving a $\mathbf{2.42\times}$ speedup.
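The replacement training described in the abstract can be illustrated with a short sketch. The snippet below is a minimal, hypothetical PyTorch rendering of the idea, assuming the teacher and student transformers have been divided into the same number of stages; the names (replacement_forward, p_replace, stage_feats) are illustrative, not from the paper, and the actual implementation may differ.

```python
import random
import torch
import torch.nn as nn

def replacement_forward(x, teacher_stages, student_stages, p_replace=0.5):
    """Randomly swap student stages for frozen teacher stages so each
    student stage learns to reproduce the teacher's behavior in context.
    Stage-wise outputs are collected, e.g. for feature-mimicking losses."""
    assert len(teacher_stages) == len(student_stages)
    stage_feats = []
    for t_stage, s_stage in zip(teacher_stages, student_stages):
        if random.random() < p_replace:
            with torch.no_grad():      # teacher stages stay frozen
                x = t_stage(x)
        else:
            x = s_stage(x)             # gradients flow through the student
        stage_feats.append(x)
    return x, stage_feats

# Toy usage (hypothetical shapes): 4 teacher stages, 4 lighter student stages.
teacher_stages = nn.ModuleList([nn.Linear(64, 64) for _ in range(4)]).eval()
student_stages = nn.ModuleList([nn.Linear(64, 64) for _ in range(4)])
tokens = torch.randn(2, 100, 64)       # (batch, tokens, embed_dim)
out, feats = replacement_forward(tokens, teacher_stages, student_stages)
```

Because teacher stages are sampled into the forward pass at random, each student stage is trained to be a drop-in replacement for its teacher counterpart rather than learned in isolation, which is what lets the framework stay agnostic to the specific transformer architecture.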
