Poster
CODA: Repurposing Continuous VAEs for Discrete Tokenization
Zeyu Liu · Zanlin Ni · Yeguo Hua · Xin Deng · Xiao Ma · Cheng Zhong · Gao Huang
Abstract:
Discrete visual tokenizers transform images into a sequence of tokens, enabling token-based visual generation akin to language models. However, this process is inherently challenging, as it requires both \emph{compressing} visual signals into a compact representation and \emph{discretizing} them into a fixed set of codes. Traditional discrete tokenizers typically learn the two tasks jointly, often leading to unstable training, low codebook utilization, and limited reconstruction quality. In this paper, we introduce \textbf{CODA} (\textbf{CO}ntinuous-to-\textbf{D}iscrete \textbf{A}daptation), a framework that decouples compression and discretization. Instead of training discrete tokenizers from scratch, CODA adapts off-the-shelf continuous VAEs---already optimized for perceptual compression---into discrete tokenizers via a carefully designed discretization process. By focusing primarily on discretization, CODA ensures stable and efficient training while retaining the strong visual fidelity of continuous VAEs. Empirically, with a $\mathbf{6 \times}$ smaller training budget than standard VQGAN, our approach achieves a remarkable codebook utilization of \textbf{100\%} and notable reconstruction FIDs (rFID) of $\mathbf{0.43}$ and $\mathbf{1.34}$ for $8 \times$ and $16 \times$ compression, respectively.
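The abstract does not detail CODA's discretization mechanism, so the following is only a minimal PyTorch sketch of the general continuous-to-discrete adaptation idea it describes: a pretrained continuous VAE stays frozen (inheriting its perceptual compression), while a small learned quantizer maps its latents to codebook indices. All names here (`LatentQuantizer`, `load_pretrained_vae`, the codebook size, the latent shape, and the straight-through/commitment-loss choice) are hypothetical illustrations, not the paper's actual design.

```python
# Minimal sketch (not the paper's method): discretizing frozen VAE latents
# with a nearest-neighbor vector quantizer and a straight-through estimator.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LatentQuantizer(nn.Module):
    """Nearest-neighbor vector quantization over continuous VAE latents."""

    def __init__(self, codebook_size: int = 1024, dim: int = 4):
        super().__init__()
        self.codebook = nn.Embedding(codebook_size, dim)
        nn.init.uniform_(self.codebook.weight, -1.0, 1.0)

    def forward(self, z: torch.Tensor):
        # z: (B, C, H, W) continuous latents from a pretrained VAE encoder.
        b, c, h, w = z.shape
        flat = z.permute(0, 2, 3, 1).reshape(-1, c)      # (B*H*W, C)
        dist = torch.cdist(flat, self.codebook.weight)   # pairwise L2 distances
        idx = dist.argmin(dim=1)                         # discrete token ids
        q = self.codebook(idx).view(b, h, w, c).permute(0, 3, 1, 2)
        # Straight-through estimator: backward pass treats quantization
        # as the identity, so gradients reach whatever precedes it.
        q_st = z + (q - z).detach()
        # Standard VQ-style codebook + commitment terms (assumed, not CODA's).
        commit_loss = F.mse_loss(q, z.detach()) + F.mse_loss(q.detach(), z)
        return q_st, idx.view(b, h, w), commit_loss


# Usage: freeze the pretrained VAE and train only the quantizer, so the
# compression capacity is inherited rather than relearned from scratch.
# vae = load_pretrained_vae()              # hypothetical loader
# for p in vae.parameters():
#     p.requires_grad_(False)
quantizer = LatentQuantizer(codebook_size=1024, dim=4)
z = torch.randn(2, 4, 32, 32)              # stand-in for vae.encode(images)
q, tokens, loss = quantizer(z)
print(tokens.shape, loss.item())
```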