

Poster

Information Density Principle for MLLM Benchmarks

Chunyi Li · Xiaozhe Li · Zicheng Zhang · Yuan Tian · Ziheng Jia · Xiaohong Liu · Xiongkuo Min · Jia Wang · Haodong Duan · Kai Chen · Guangtao Zhai


Abstract:

With the emergence of Multimodal Large Language Models (MLLMs), hundreds of benchmarks have been developed to ensure the reliability of MLLMs in downstream tasks. However, the evaluation mechanism itself may not be reliable: for MLLM developers, questions remain about which benchmark to use and whether its test results meet their requirements. We therefore propose a critical principle of Information Density, which examines how much insight a benchmark can provide for the development of MLLMs. We characterize it along four key dimensions: (1) Fallacy, (2) Difficulty, (3) Redundancy, (4) Diversity. Through a comprehensive analysis of more than 10,000 samples, we measure the information density of 19 MLLM benchmarks. Experiments show that the latest benchmarks provide more insight than earlier ones, but there is still room for improvement in their information density. We hope this principle can promote the development and application of future MLLM benchmarks.
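The abstract does not state how the four dimensions are combined into a single score, so the following is only a minimal illustrative sketch. The equal weighting, the [0, 1] normalization, and the inversion of Fallacy and Redundancy (so that higher always means more informative) are all assumptions, not the paper's actual formula.

```python
# Hypothetical sketch: aggregating the four Information Density
# dimensions into one score. Weights and normalization are
# illustrative assumptions, not the paper's method.
from dataclasses import dataclass


@dataclass
class DimensionScores:
    fallacy: float     # fraction of flawed samples (lower is better), in [0, 1]
    difficulty: float  # how challenging the samples are (higher is better), in [0, 1]
    redundancy: float  # overlap between samples (lower is better), in [0, 1]
    diversity: float   # coverage of distinct skills (higher is better), in [0, 1]


def information_density(s: DimensionScores) -> float:
    """Combine the four dimensions into a single score in [0, 1].

    Fallacy and redundancy are inverted so that a higher score always
    means a more informative benchmark; equal weights are an assumption.
    """
    contributions = [
        1.0 - s.fallacy,
        s.difficulty,
        1.0 - s.redundancy,
        s.diversity,
    ]
    return sum(contributions) / len(contributions)


# Example: a benchmark with few flawed samples, moderate difficulty,
# some redundancy, and good diversity.
score = information_density(
    DimensionScores(fallacy=0.05, difficulty=0.6, redundancy=0.2, diversity=0.8)
)
# score → 0.7875 under these assumed inputs and equal weights
```

Under this toy aggregation, a benchmark improves its score by reducing flawed and redundant samples while increasing difficulty and diversity, which matches the direction of the four dimensions described in the abstract.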
