

Poster

Constructing Ophthalmic MLLM for Positioning-diagnosis Collaboration Through Clinical Cognitive Chain Reasoning

Xinyao Liu · Diping Song


Abstract: Multimodal large language models (MLLMs) demonstrate significant potential in the field of medical diagnosis. However, they face critical challenges in specialized domains such as ophthalmology, particularly the fragmentation of annotation granularity and inconsistencies in clinical reasoning logic, which hinder precise cross-modal understanding. This paper introduces **FundusExpert**, the first ophthalmology-specific MLLM with integrated positioning-diagnosis reasoning capabilities, along with **FundusGen**, a dataset constructed through the intelligent **Fundus-Engine** system. Fundus-Engine automates localization and leverages MLLM-based semantic expansion to integrate global disease classification, local object detection, and fine-grained feature analysis within a single fundus image. Additionally, by constructing a clinically aligned cognitive chain, it guides the model to generate interpretable reasoning paths. FundusExpert, fine-tuned with instruction data from FundusGen, achieves the best performance on ophthalmic question-answering tasks, surpassing the average accuracy of the 40B MedRegA by 26.6%. It also excels in zero-shot report generation, achieving a clinical consistency of 77.0%, significantly outperforming GPT-4o's 47.6%. Furthermore, we reveal a scaling law between data quality and model capability ($L \propto N^{0.33}$), demonstrating that the cognitive alignment annotations in FundusGen enhance data utilization efficiency. By integrating region-level localization with diagnostic reasoning chains, our work develops a scalable, clinically aligned MLLM and explores a pathway toward bridging the visual-language gap in domain-specific MLLMs.
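A rough reading of the reported relation, assuming $L$ denotes the measured capability, $N$ the number of FundusGen training samples, and $c$ an unspecified fitted constant:

$$L(N) \approx c\,N^{0.33} \quad\Longrightarrow\quad \frac{L(2N)}{L(N)} = 2^{0.33} \approx 1.26,$$

i.e., under this fit, doubling the amount of cognitively aligned training data would be expected to improve capability by roughly 26%.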
