NU author(s): Yumin Zhang
Full text for this publication is not currently held within this repository. Alternative links are provided below where available.
© 2025, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract: Classifiers often become biased when trained on class-imbalanced datasets, especially in the semi-supervised learning (SSL) setting. Previous work attempts to re-balance the classifier by subtracting the logits of a class-irrelevant image, but lacks a firm theoretical basis. We theoretically analyze why exploiting a baseline image can refine pseudo-labels and prove that a black image is the best choice. We also show that, as training progresses, the pseudo-labels before and after refinement grow closer. Based on this observation, we propose a debiasing scheme dubbed LCGC (Learning from Consistency Gradient Conflicting), which encourages biased class predictions during training. We intentionally update the pseudo-labels whose gradients conflict with the debiased logits, which represent the optimization direction offered by the over-imbalanced classifier predictions. We then debias the predictions by subtracting the baseline-image logits during testing. Extensive experiments demonstrate that LCGC significantly improves the prediction accuracy of existing class-imbalanced SSL (CISSL) models on public benchmarks.
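The test-time debiasing step the abstract describes can be illustrated with a minimal sketch: subtract the logits the classifier assigns to a class-irrelevant baseline (black) image from the logits of a test image before taking the argmax. The function name and the toy logit values below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def debias_logits(test_logits: np.ndarray, baseline_logits: np.ndarray) -> np.ndarray:
    """Subtract the baseline-image logits (e.g. an all-black image) from the
    test-image logits, as in the abstract's test-time debiasing step.
    Hypothetical helper for illustration only."""
    return test_logits - baseline_logits

# Toy example: a classifier biased toward class 0.
logits = np.array([2.0, 1.5, 0.5])    # logits for a test image
baseline = np.array([1.2, 0.3, 0.1])  # logits for an all-black image
debiased = debias_logits(logits, baseline)
pred = int(np.argmax(debiased))       # bias removed: class 1 now wins
```

The biased classifier would have predicted class 0 from the raw logits; after subtracting the baseline logits, the prediction shifts to class 1.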
Author(s): Xing W, Cheng Y, Yi H, Gao X, Wei X, Guo X, Zhang Y, Pang X
Publication type: Conference Proceedings (inc. Abstract)
Publication status: Published
Conference Name: 39th Annual AAAI Conference on Artificial Intelligence
Year of Conference: 2025
Pages: 21697-21706
Online publication date: 11/04/2025
Acceptance date: 02/04/2018
ISSN: 2374-3468
Publisher: Association for the Advancement of Artificial Intelligence
URL: https://doi.org/10.1609/aaai.v39i20.35474
DOI: 10.1609/aaai.v39i20.35474
ISBN: 9781577358978