

Poster

TAPNext: Tracking Any Point (TAP) as Next Token Prediction

Artem Zholus · Carl Doersch · Yi Yang · Skanda Koppula · Viorica Patraucean · Xu He · Ignacio Rocco · Mehdi S. M. Sajjadi · Sarath Chandar · Ross Goroshin

Exhibit Hall I #899
Tue 21 Oct 6:15 p.m. PDT — 8:15 p.m. PDT

Abstract:

Tracking Any Point (TAP) in a video is a challenging computer vision problem with many demonstrated applications in robotics, video editing, and 3D reconstruction. Existing methods for TAP rely heavily on complex tracking-specific inductive biases and heuristics, limiting their generality and potential for scaling. To address these challenges, we present TAPNext, a new approach that casts TAP as sequential masked token decoding. Our model is causal, tracks in a purely online fashion, and removes tracking-specific inductive biases. This enables TAPNext to run with minimal latency and eliminates the temporal windowing required by many existing state-of-the-art trackers. Despite its simplicity, TAPNext achieves new state-of-the-art tracking performance among both online and offline trackers. Finally, we present evidence that many widely used tracking heuristics emerge naturally in TAPNext through end-to-end training.
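The abstract frames point tracking as causal, online token decoding: frames arrive one at a time and the model updates its point estimates without a temporal window. The sketch below is only an illustration of that framing, not the authors' architecture; the module names, shapes, and the choice of a GRU cell as the causal backbone are all assumptions made for brevity.

```python
# Minimal conceptual sketch (not the TAPNext code) of online, causal point
# tracking: each frame is encoded into a token, fused with per-point state,
# and the current (x, y) position of every query point is decoded per frame.
import torch
import torch.nn as nn


class OnlinePointTracker(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.frame_encoder = nn.Linear(3 * 32 * 32, dim)  # stand-in for a patch/ViT encoder
        self.query_encoder = nn.Linear(2, dim)            # embeds (x, y) query coordinates
        self.causal_core = nn.GRUCell(dim * 2, dim)       # causal state carried across frames
        self.coord_head = nn.Linear(dim, 2)               # decodes state -> (x, y) position

    def init_state(self, queries):
        # queries: (num_points, 2) normalized coordinates in the query frame
        return self.query_encoder(queries)                # one recurrent state per point

    def step(self, state, frame):
        # One online step: consume a single frame, update per-point states,
        # and decode the current position estimate for every tracked point.
        frame_tok = self.frame_encoder(frame.flatten()).expand(state.shape[0], -1)
        state = self.causal_core(torch.cat([frame_tok, state], dim=-1), state)
        return state, self.coord_head(state)


# Usage: stream frames one at a time; latency is a single forward step per frame,
# with no sliding temporal window over the video.
tracker = OnlinePointTracker()
queries = torch.rand(4, 2)                 # 4 points to track
state = tracker.init_state(queries)
for frame in torch.rand(8, 3, 32, 32):     # 8-frame toy video
    state, xy = tracker.step(state, frame)
    print(xy.shape)                        # (4, 2) per-frame point positions
```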
