

Poster

Learnable Logit Adjustment for Imbalanced Semi-Supervised Learning under Class Distribution Mismatch

Lee Hyuck · Taemin Park · Heeyoung Kim


Abstract:

In class-imbalanced learning (CIL), post-hoc logit adjustment (LA) effectively mitigates class imbalance by adjusting biased logits according to label frequencies. Given the success of LA in CIL, recent class-imbalanced semi-supervised learning (CISSL) algorithms have incorporated LA, improving performance when the labeled and unlabeled datasets share the same class distribution. In a common real-world scenario, however, the class distribution of the unlabeled set is unknown and may differ from that of the labeled set. In this case, LA may apply an inappropriate degree of logit adjustment, potentially degrading classification performance, because it cannot account for the unknown class distribution of the unlabeled set. To address this problem, we propose a novel CISSL algorithm named learnable logit adjustment (LLA). Unlike the original LA, LLA learns the appropriate degree of logit adjustment by minimizing the class-averaged loss computed over both the labeled and unlabeled sets. Based on the learned degree, LLA refines the biased pseudo-labels of base semi-supervised learning algorithms and corrects the biased class predictions on the test set by adjusting the logits. Experimental results on benchmark datasets demonstrate that LLA achieves state-of-the-art performance in CISSL.
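
For intuition, the sketch below shows how post-hoc logit adjustment with a learnable degree parameter could be set up. It is a minimal illustration assuming a PyTorch-style workflow; the names (`tau`, `class_prior`, `class_averaged_loss`) and the plain gradient loop are assumptions made for exposition, not the paper's exact formulation of LLA.

```python
# Minimal sketch: post-hoc logit adjustment whose degree is learned by
# minimizing a class-averaged loss. Illustrative only; not the authors' code.
import torch
import torch.nn.functional as F

def adjust_logits(logits, class_prior, tau):
    # Standard LA subtracts tau * log(prior), boosting rare-class logits.
    return logits - tau * torch.log(class_prior)

def class_averaged_loss(logits, labels, num_classes):
    # Average per-class mean losses so every class contributes equally.
    losses = F.cross_entropy(logits, labels, reduction="none")
    per_class = []
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            per_class.append(losses[mask].mean())
    return torch.stack(per_class).mean()

def learn_tau(logits, labels, class_prior, num_classes, steps=200, lr=0.1):
    # Learn the degree of adjustment tau (assumed scalar here) by gradient
    # descent on the class-averaged loss of the adjusted logits.
    tau = torch.zeros(1, requires_grad=True)
    opt = torch.optim.SGD([tau], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = class_averaged_loss(
            adjust_logits(logits, class_prior, tau), labels, num_classes)
        loss.backward()
        opt.step()
    return tau.detach()
```

Under these assumptions, the learned `tau` would then be reused at inference time, e.g. `adjust_logits(test_logits, class_prior, tau)`, to debias pseudo-labels and test-set predictions.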
