

Poster

A Token-level Text Image Foundation Model for Document Understanding

Tongkun Guan · Zining Wang · Pei Fu · Zhentao Guo · Wei Shen · Kai Zhou · Tiezhu Yue · Chen Duan · Hao Sun · Qianyi Jiang · Junfeng Luo · Xiaokang Yang


Abstract:

In recent years, general visual foundation models (VFMs) have seen increasing adoption, particularly as image encoders for popular multi-modal large language models (MLLMs). However, without semantically fine-grained supervision, these models still make fundamental prediction errors on downstream text-image tasks, i.e., perception, understanding, and reasoning over images containing small and dense text. To bridge this gap, we develop TokenFD, the first token-level visual foundation model specifically tailored for text-image tasks and designed to support a variety of traditional downstream applications. To facilitate the pretraining of TokenFD, we also devise a high-quality data-production pipeline that constructs the first token-level image-text dataset, TokenIT, comprising 20 million images and 1.8 billion token-mask pairs. Furthermore, leveraging this foundation's exceptional image-as-text capability, we seamlessly replace previous VFMs with TokenFD to build TokenVL, a token-level visual-language MLLM for VQA-based document understanding tasks. Finally, extensive experiments demonstrate the effectiveness of TokenFD and TokenVL. Code, demo, datasets, and weights will be available soon.
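As a rough illustration of the kind of token-level supervision the abstract describes, the sketch below shows one possible layout for a TokenIT-style sample: an image paired with per-token segmentation masks (the reported 1.8 billion pairs over 20 million images works out to roughly 90 pairs per image on average). The class names, fields, and layout here are assumptions for illustration only, not the released TokenIT schema.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class TokenMaskPair:
    # Hypothetical record: one text token rendered in the image,
    # paired with a pixel-level mask marking where it appears.
    token: str            # word-piece / BPE token string
    mask: np.ndarray      # boolean mask over image pixels, shape (H, W)

@dataclass
class TokenITSample:
    # Hypothetical per-image sample: the image plus all of its
    # token-mask pairs (on average ~90 per image, from 1.8B / 20M).
    image: np.ndarray              # RGB image, shape (H, W, 3)
    pairs: list[TokenMaskPair]

# Toy example: a 32x64 image with two annotated tokens,
# each occupying one half of the image.
h, w = 32, 64
image = np.zeros((h, w, 3), dtype=np.uint8)
mask_a = np.zeros((h, w), dtype=bool); mask_a[:, :32] = True
mask_b = np.zeros((h, w), dtype=bool); mask_b[:, 32:] = True

sample = TokenITSample(
    image=image,
    pairs=[TokenMaskPair("Token", mask_a), TokenMaskPair("FD", mask_b)],
)
print(len(sample.pairs), "token-mask pairs")  # -> 2 token-mask pairs
```

In this reading, each mask ties a specific text token to the pixels that depict it, which is the "semantically fine-grained supervision" the abstract argues generic VFMs lack.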
