Poster
Advancing Visual Large Language Model for Multi-granular Versatile Perception
Wentao Xiang · Haoxian Tan · Cong Wei · Yujie Zhong · Dengjie Li · Yujiu Yang
Perception is a fundamental task in computer vision, encompassing a diverse set of subtasks that can be systematically organized into four distinct groups along two dimensions: prediction type and instruction type. Notably, existing research often focuses on only a limited subset of these combinations, which constrains applicability and versatility across contexts. In response to this challenge, we present MVP, a novel and unified Visual Large Language Model (VLLM) framework designed to integrate word-based and sentence-based perception tasks alongside box and mask predictions within a single framework. MVP employs an innovative multi-granularity decoder coupled with a unified prompt template, which together enable seamless joint training of a wide array of tasks, including but not limited to panoptic segmentation, detection, grounding, and referring expression segmentation. Furthermore, we introduce a query enhancement strategy that harnesses the decoding and generative capabilities inherent in large language models. Extensive experiments across a range of benchmarks in both word-based and sentence-based perception tasks substantiate the efficacy of our framework.
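The abstract's two-dimensional task taxonomy (word-based vs. sentence-based instructions, box vs. mask predictions) can be pictured with a small sketch. The snippet below is purely illustrative and not the authors' implementation: the `PerceptionRequest` fields, `build_prompt` template, and `MultiGranularityDecoder` class are assumptions made only to show how a single prompt format could route all four task groups to shared queries with separate box and mask heads.

```python
# Illustrative sketch only: names, prompt format, and shapes are assumed,
# not taken from the MVP paper.
from dataclasses import dataclass
from typing import Literal

import torch
import torch.nn as nn


@dataclass
class PerceptionRequest:
    instruction: str                                   # e.g. "person" or "the dog left of the bench"
    instruction_type: Literal["word", "sentence"]      # word-based vs. sentence-based
    prediction_type: Literal["box", "mask"]            # box vs. mask output


def build_prompt(req: PerceptionRequest) -> str:
    """Hypothetical unified prompt template covering all four task groups."""
    return (
        f"<image> Perceive targets. "
        f"instruction_type={req.instruction_type}; "
        f"prediction_type={req.prediction_type}; "
        f"instruction: {req.instruction}"
    )


class MultiGranularityDecoder(nn.Module):
    """Toy decoder: shared learnable queries, separate box and mask heads (assumed design)."""

    def __init__(self, dim: int = 256, num_queries: int = 100, mask_tokens: int = 64):
        super().__init__()
        self.queries = nn.Embedding(num_queries, dim)
        self.box_head = nn.Linear(dim, 4)             # (cx, cy, w, h) per query
        self.mask_head = nn.Linear(dim, mask_tokens)  # coarse mask logits per query

    def forward(self, visual_feats: torch.Tensor, prediction_type: str) -> torch.Tensor:
        # Cross-attend queries to visual features (simplified to a dot-product mix).
        q = self.queries.weight                           # (Q, D)
        attn = torch.softmax(q @ visual_feats.T, dim=-1)  # (Q, N)
        decoded = attn @ visual_feats                     # (Q, D)
        head = self.box_head if prediction_type == "box" else self.mask_head
        return head(decoded)


if __name__ == "__main__":
    decoder = MultiGranularityDecoder()
    feats = torch.randn(196, 256)  # stand-in for visual tokens from a VLLM backbone
    req = PerceptionRequest("the dog left of the bench", "sentence", "mask")
    print(build_prompt(req))
    print(decoder(feats, req.prediction_type).shape)  # torch.Size([100, 64])
```

Under these assumptions, the same decoder and prompt serve detection (word + box), panoptic segmentation (word + mask), grounding (sentence + box), and referring expression segmentation (sentence + mask), which is the unification the abstract describes.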