

Poster

FinMMR: Make Financial Numerical Reasoning More Multimodal, Comprehensive, and Challenging

Zichen Tang · Haihong E · Jiacheng Liu · Zhongjun Yang · Rongjin Li · Zihua Rong · Haoyang He · Zhuodi Hao · Xinyang Hu · Kun Ji · Ziyan Ma · Mengyuan Ji · Jun Zhang · Chenghao Ma · Qianhe Zheng · Yang Liu · Yiling Huang · Xinyi Hu · Qing Huang · Zijian Xie · Shiyao Peng


Abstract:

We present FinMMR, a novel bilingual multimodal benchmark tailored to evaluate the reasoning capabilities of multimodal large language models (MLLMs) on financial numerical reasoning tasks. Compared to existing benchmarks, our work introduces three significant advancements. (1) Multimodality: We meticulously transform existing financial reasoning datasets and construct novel questions from the latest Chinese financial research reports. The dataset comprises 4.3K questions and 8.7K images spanning 14 categories, including tables, bar charts, and ownership structure charts. (2) Comprehensiveness: FinMMR encompasses 14 financial subdomains, including corporate finance, banking, and industry analysis, significantly exceeding existing benchmarks in the breadth of financial domain knowledge. (3) Challenge: Models must perform precise multi-step numerical reasoning by integrating financial knowledge with an understanding of complex financial images and text. The best-performing MLLM achieves only 51.4% accuracy on Hard problems. We believe that FinMMR will drive advancements in enhancing the reasoning capabilities of MLLMs in real-world scenarios.
