FairBias: Mitigating bias in medical image diagnosis with mixed noise and class imbalance

  • Saeed Iqbal
  • Xiaopin Zhong
  • Muhammad Attique Khan
  • Zongze Wu
  • Nouf Abdullah Almujally
  • Weixiang Liu
  • Amir Hussain

Research output: Contribution to journal › Article › peer-review

2 Scopus citations

Abstract

In medical image diagnosis, class-imbalanced and noisy datasets pose a challenge to deep learning algorithms, leading to biased models, poor performance on minority classes, and reduced confidence in AI-based diagnosis. These problems, especially class imbalance, are not adequately addressed by current approaches such as QMix. To address these challenges, we propose FairBias, a novel framework that integrates class-aware sample separation, bias-aware loss functions, dynamic class reweighting, and advanced data augmentation to mitigate bias while maintaining high diagnostic accuracy. FairBias achieves significant fairness improvements by reducing the True Positive Rate (TPR) gap between majority and minority subgroups from 15% to 5% on datasets such as CheXpert and Breast MRI/FFDM. It also narrows the Equal Opportunity (EO) disparity from 12% to 3% and improves Demographic Parity (DP) by ensuring that predicted positive rates across subgroups differ by less than 2%. These fairness gains come without compromising diagnostic performance, as evidenced by AUC values of 0.97 on Breast MRI/FFDM, 0.95 on Hep-2, 0.92 on SOKL, and 0.96 on CheXpert. Furthermore, FairBias demonstrates robustness under high noise ratios, achieving a Kappa score of 88.5% on the Hep-2 dataset with 55% Mis-Labeled Low-Quality Samples (MLQS) and 20% Mis-Labeled High-Quality Samples (MHQS), outperforming state-of-the-art (SOTA) methods such as QMix and vlm-fairness. By addressing both mixed noise and class imbalance, FairBias ensures equitable and accurate disease diagnosis, particularly for minority classes that are often overlooked by existing systems. The framework's limitations, including computational complexity and generalization to new modalities, are discussed, along with future directions to enhance its scalability and applicability.
Overall, FairBias represents a significant step toward fairer and more reliable AI-driven medical image diagnosis, bridging critical gaps in current SOTA techniques.
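The abstract does not give implementation details, but the components it names (class-aware sample separation via a Gaussian mixture model, focal loss, and dynamic class reweighting) are standard building blocks in noisy-label pipelines. A minimal sketch of how such pieces are commonly combined follows; all function names, defaults, and the specific wiring here are illustrative assumptions, not taken from the paper:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def separate_clean_samples(per_sample_losses, threshold=0.5):
    """Fit a two-component GMM to per-sample losses; the low-loss
    component is treated as the 'clean' set (common noisy-label trick)."""
    losses = np.asarray(per_sample_losses, dtype=float).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(losses)
    clean_component = int(np.argmin(gmm.means_.ravel()))
    p_clean = gmm.predict_proba(losses)[:, clean_component]
    return p_clean >= threshold  # boolean mask of presumed-clean samples

def dynamic_class_weights(labels, n_classes):
    """Inverse-frequency class weights, normalized so they average to 1;
    rarer classes receive proportionally larger weights."""
    counts = np.bincount(labels, minlength=n_classes).astype(float)
    counts[counts == 0] = 1.0  # avoid division by zero for absent classes
    w = 1.0 / counts
    return w * n_classes / w.sum()

def focal_loss(p, y, gamma=2.0, alpha=0.25, eps=1e-7):
    """Binary focal loss (Lin et al.): down-weights easy, well-classified
    examples so training focuses on hard or minority-class samples."""
    p = np.clip(p, eps, 1 - eps)
    pt = np.where(y == 1, p, 1 - p)      # probability of the true class
    a = np.where(y == 1, alpha, 1 - alpha)
    return -a * (1 - pt) ** gamma * np.log(pt)
```

In a typical pipeline of this shape, `separate_clean_samples` filters or down-weights suspect labels each epoch, while the focal loss is scaled by the dynamic class weights so minority classes are not drowned out by the majority.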

Original language: English
Article number: 130910
Journal: Neurocomputing
Volume: 651
State: Published - 28 Oct 2025
Externally published: Yes

Keywords

  • Bias-aware loss functions
  • Class-aware sample separation
  • Domain-invariant feature extraction
  • Dynamic class reweighting
  • Focal loss
  • Gaussian mixture model (GMM)
  • Multi-domain adaptation

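The fairness figures quoted in the abstract (TPR gap, Equal Opportunity disparity, Demographic Parity difference) can be computed from per-subgroup prediction rates. A minimal sketch, assuming binary labels/predictions and a hypothetical integer subgroup array (function names are illustrative, not from the paper):

```python
import numpy as np

def group_rates(y_true, y_pred, groups):
    """Per-group TPR and predicted-positive rate."""
    tpr, ppr = {}, {}
    for g in np.unique(groups):
        m = groups == g
        pos = (y_true == 1) & m                 # positives in this group
        tpr[g] = y_pred[pos].mean() if pos.any() else float("nan")
        ppr[g] = y_pred[m].mean()               # fraction predicted positive
    return tpr, ppr

def fairness_gaps(y_true, y_pred, groups):
    """Worst-case gaps across subgroups: the TPR gap (which is also the
    Equal Opportunity disparity for binary labels) and the Demographic
    Parity difference in predicted-positive rates."""
    tpr, ppr = group_rates(np.asarray(y_true), np.asarray(y_pred),
                           np.asarray(groups))
    tpr_gap = max(tpr.values()) - min(tpr.values())
    dp_gap = max(ppr.values()) - min(ppr.values())
    return tpr_gap, dp_gap
```

Under this reading, the abstract's "TPR gap from 15% to 5%" corresponds to `tpr_gap` shrinking from 0.15 to 0.05, and the "less than 2%" DP claim to `dp_gap < 0.02`.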