Poster
SVTRv2: CTC Beats Encoder-Decoder Models in Scene Text Recognition
Yongkun Du · Zhineng Chen · Hongtao Xie · Caiyan Jia · Yu-Gang Jiang
Connectionist temporal classification (CTC)-based scene text recognition (STR) methods, e.g., SVTR, are widely employed in OCR applications, mainly due to their simple architecture, which contains only a visual model and a CTC-aligned linear classifier, and their consequently fast inference. However, they generally exhibit worse accuracy than encoder-decoder-based methods (EDTRs) because they struggle with irregular text and lack linguistic context modeling. To address these challenges, we propose SVTRv2, a CTC model endowed with the ability to handle text irregularity and model linguistic context. First, a multi-size resizing strategy is proposed to resize text instances to appropriate predefined sizes, effectively avoiding severe text distortion. Meanwhile, we introduce a feature rearrangement module to ensure that visual features accommodate the requirements of CTC, thus alleviating the alignment puzzle. Second, we propose a semantic guidance module. It integrates linguistic context into the visual features, allowing the CTC model to leverage language information for improved accuracy. Moreover, this module can be omitted at the inference stage and does not increase the time cost. We extensively evaluate SVTRv2 on both standard and recent challenging benchmarks, where SVTRv2 is fairly compared to mainstream STR models across multiple scenarios, including different types of text irregularity, languages, long text, and the use of pretraining. The results indicate that SVTRv2 surpasses most EDTRs across these scenarios in terms of both accuracy and inference speed.
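To make the CTC alignment that SVTRv2 relies on concrete, the following is a minimal sketch of greedy CTC decoding: the per-frame classifier output is collapsed by merging repeated labels and then removing blanks. The blank index, function names, and toy vocabulary are illustrative assumptions, not details from the paper.

```python
BLANK = 0  # assumed CTC blank index; not specified in the abstract

def ctc_greedy_decode(frame_argmax, id_to_char):
    """Apply the CTC collapse rule: merge consecutive repeats, drop blanks."""
    out = []
    prev = None
    for idx in frame_argmax:
        if idx != prev and idx != BLANK:
            out.append(id_to_char[idx])
        prev = idx
    return "".join(out)

# Toy example with vocabulary {0: blank, 1: 'c', 2: 'a', 3: 't'}:
# the frame sequence "cc-aa-ttt" collapses to "cat".
vocab = {1: "c", 2: "a", 3: "t"}
frames = [1, 1, 0, 2, 2, 0, 3, 3, 3]
print(ctc_greedy_decode(frames, vocab))  # prints "cat"
```

Note that a blank between two identical labels (e.g., `[3, 0, 3]`) yields a doubled character, which is how CTC distinguishes "tt" from a single "t".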