Poster
A Conditional Probability Framework for Compositional Zero-shot Learning
Peng Wu · Qiuxia Lai · Hao Fang · Guo-Sen Xie · Yilong Yin · Xiankai Lu · Wenguan Wang
Compositional Zero-Shot Learning (CZSL) aims to recognize unseen combinations of known objects and attributes by leveraging knowledge from previously seen compositions. Traditional approaches primarily focus on disentangling attributes and objects, treating them as independent entities during learning. However, this assumption overlooks the semantic constraints and contextual dependencies within a composition. For example, certain attributes naturally pair with specific objects (e.g., "striped" applies to "zebra" or "shirt" but not "sky" or "water"), while the same attribute can manifest differently depending on context (e.g., "young" in "young tree" vs. "young dog"). Thus, capturing attribute-object interdependence remains a fundamental yet long-ignored challenge in CZSL.

In this paper, we adopt a Conditional Probability Framework (CPF) to explicitly model attribute-object dependencies. We decompose the probability of a composition into two components: the likelihood of an object and the conditional likelihood of its attribute. To enhance object feature learning, we incorporate textual descriptors to highlight semantically relevant image regions. These enhanced object features then guide attribute learning through a cross-attention mechanism, ensuring better contextual alignment. By jointly optimizing the object likelihood and the conditional attribute likelihood, our method effectively captures compositional dependencies and generalizes well to unseen compositions. Extensive experiments on multiple CZSL benchmarks demonstrate the superiority of our approach. The source code will be released.
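The abstract describes factorizing a composition's probability as p(a, o | x) = p(o | x) · p(a | o, x), with object features guiding attribute learning via cross-attention. Since the source code is not yet released, the following PyTorch sketch only illustrates that decomposition under assumed shapes and hypothetical module names (ConditionalCZSLHead, obj_embed, etc.); the paper's textual-descriptor enhancement and training losses are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalCZSLHead(nn.Module):
    """Illustrative head for the decomposition
    log p(a, o | x) = log p(o | x) + log p(a | o, x).

    Module and parameter names are hypothetical; this is not the
    authors' released implementation.
    """
    def __init__(self, feat_dim, num_objects, num_attributes):
        super().__init__()
        self.obj_classifier = nn.Linear(feat_dim, num_objects)
        # Learnable object embeddings that condition attribute prediction.
        self.obj_embed = nn.Embedding(num_objects, feat_dim)
        # Cross-attention: object embeddings query image patch features
        # (feat_dim must be divisible by num_heads).
        self.cross_attn = nn.MultiheadAttention(feat_dim, num_heads=4,
                                                batch_first=True)
        self.attr_classifier = nn.Linear(feat_dim, num_attributes)

    def forward(self, patch_feats):
        # patch_feats: (B, N, D) patch-level image features.
        global_feat = patch_feats.mean(dim=1)           # (B, D)
        obj_logits = self.obj_classifier(global_feat)   # object scores
        log_p_obj = F.log_softmax(obj_logits, dim=-1)   # (B, O): log p(o | x)

        # Condition attribute prediction on every candidate object:
        # each object embedding attends over the image patches.
        B = patch_feats.size(0)
        queries = self.obj_embed.weight.unsqueeze(0).expand(B, -1, -1)  # (B, O, D)
        attr_ctx, _ = self.cross_attn(queries, patch_feats, patch_feats)  # (B, O, D)
        attr_logits = self.attr_classifier(attr_ctx)
        log_p_attr = F.log_softmax(attr_logits, dim=-1)  # (B, O, A): log p(a | o, x)

        # Joint log-probability of each (object, attribute) composition.
        return log_p_obj.unsqueeze(-1) + log_p_attr      # (B, O, A)

# Example: B=2 images, 49 patches, D=256, 12 objects, 8 attributes.
head = ConditionalCZSLHead(256, num_objects=12, num_attributes=8)
scores = head(torch.randn(2, 49, 256))                   # (2, 12, 8)
```

At inference, the highest-scoring (object, attribute) pair in the joint score tensor, restricted to the benchmark's candidate compositions, would give the predicted composition.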