

Poster

AgroBench: Vision-Language Model Benchmark in Agriculture

Risa Shinoda · Nakamasa Inoue · Hirokatsu Kataoka · Masaki Onishi · Yoshitaka Ushiku


Abstract:

Precise automated understanding of agricultural tasks such as disease identification is essential for sustainable crop production. Recent advances in vision-language models (VLMs) are expected to further expand the range of agricultural tasks by facilitating human-model interaction through easy, text-based communication. Here, we introduce AgroBench (Agronomist AI Benchmark), a benchmark for evaluating VLMs across seven agricultural topics, covering key areas of agricultural engineering that are relevant to real-world farming. Unlike recent agricultural VLM benchmarks, AgroBench is annotated by expert agronomists. AgroBench covers a state-of-the-art range of categories, including 197 crop categories and 682 disease categories, enabling a thorough evaluation of VLM capabilities. Our evaluation on AgroBench reveals that VLMs still have room for improvement in fine-grained identification tasks; notably, in weed identification, most open-source VLMs perform close to random. Drawing on our wide range of topics and expert-annotated categories, we analyze the types of errors VLMs make and suggest potential pathways for future VLM development. Our dataset and code will be made available.
