

Poster

Know "No" Better: A Data-Driven Approach for Enhancing Negation Awareness in CLIP

Junsung Park · Jungbeom Lee · Jongyoon Song · Sangwon Yu · Dahuin Jung · Sungroh Yoon


Abstract:

While CLIP has significantly advanced multimodal understanding by bridging vision and language, its inability to grasp negation, such as failing to differentiate "parking" from "no parking", poses substantial challenges. By analyzing the data used in the public CLIP model's pre-training, we posit that this limitation stems from a lack of negation-inclusive data. To address this, we introduce data generation pipelines that employ a large language model (LLM) and a multimodal LLM to produce negation-inclusive captions. Fine-tuning CLIP with data generated by our pipelines, we develop NegationCLIP, which enhances negation awareness while preserving generality. Moreover, to enable a comprehensive evaluation of negation understanding, we propose NegRefCOCOg, a benchmark tailored to test VLMs' ability to interpret negation across diverse expressions and positions within a sentence. Experiments on various CLIP architectures validate the effectiveness of our data generation pipelines in enhancing CLIP's ability to perceive negation accurately. Additionally, NegationCLIP's enhanced negation awareness has practical applications across various multimodal tasks, demonstrated by performance gains in text-to-image generation and referring image segmentation.
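The negation failure described in the abstract is straightforward to probe with a public CLIP checkpoint. The minimal sketch below uses the Hugging Face transformers CLIP API and a hypothetical local image file sign.jpg (assumed here to show a "no parking" sign); it is an illustration of the evaluation setup the paper motivates, not the authors' pipeline. A negation-unaware model tends to score the two captions similarly, which is the gap NegationCLIP targets.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Load a public CLIP checkpoint and its paired preprocessor.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
model.eval()

# Hypothetical image of a "no parking" sign.
image = Image.open("sign.jpg")

# Two captions that differ only by negation.
texts = ["a photo of a parking sign", "a photo of a no parking sign"]

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Image-to-text similarity, normalized over the two candidate captions.
probs = outputs.logits_per_image.softmax(dim=-1)[0]
for text, p in zip(texts, probs):
    print(f"{p:.3f}  {text}")
```

Comparing these probabilities before and after negation-aware fine-tuning is one simple way to check whether a model separates a concept from its negated counterpart.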
