Semi-supervised semantic segmentation leverages unlabeled data to reduce annotation costs, but current pseudo-labeling approaches discard uncertain pixels, ignore class imbalance, and lack explicit feature-level regularization. We present FARCLUSS, a unified framework that transforms prediction uncertainty into a learning asset through four synergistic components: (1) Fuzzy Pseudo-Labeling preserves soft class distributions from top-K teacher predictions instead of forcing hard one-hot assignments; (2) Uncertainty-Aware Dynamic Weighting modulates pixel-wise loss contributions via normalized entropy so that confident pixels drive learning while noisy ones are dampened; (3) Adaptive Class Rebalancing dynamically adjusts per-class loss weights based on batch-level pseudo-label frequencies; and (4) a Lightweight Contrastive Regularization module that tightens intra-class feature clusters and pushes apart inter-class prototypes without adding heavy memory overhead. Extensive experiments on Pascal VOC 2012 and Cityscapes demonstrate consistent improvements over competitive baselines across multiple label partitions and backbone architectures.
- **Fuzzy Pseudo-Labeling:** retains soft class distributions from top-K teacher predictions, preserving inter-class ambiguity instead of forcing hard one-hot pseudo-labels.
- **Uncertainty-Aware Dynamic Weighting:** modulates each pixel's loss contribution by its normalized entropy, so confident predictions drive learning while uncertain ones are dampened.
- **Adaptive Class Rebalancing:** dynamically adjusts per-class loss weights using batch-level pseudo-label frequencies to counteract class imbalance in unlabeled data.
- **Lightweight Contrastive Regularization:** a prototype-based contrastive loss that tightens intra-class feature clusters and separates inter-class prototypes without heavy memory overhead.
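The first two components can be illustrated together, since both operate on the teacher's per-pixel softmax. The following is a minimal NumPy sketch, not the paper's implementation: function names, the top-K masking strategy, and the `1 - H(p)/log C` weighting form are our assumptions based on the descriptions above.

```python
import numpy as np

def fuzzy_pseudo_labels(teacher_logits, k=3):
    """Keep only the top-K class probabilities per pixel and renormalize,
    preserving soft inter-class ambiguity instead of a hard argmax.
    teacher_logits: (num_classes, H, W)."""
    probs = np.exp(teacher_logits - teacher_logits.max(axis=0, keepdims=True))
    probs /= probs.sum(axis=0, keepdims=True)           # softmax over classes
    kth = np.sort(probs, axis=0)[-k]                    # (H, W): k-th largest prob
    masked = np.where(probs >= kth, probs, 0.0)         # zero out non-top-K classes
    return masked / masked.sum(axis=0, keepdims=True)   # renormalize to sum to 1

def entropy_weights(soft_labels, eps=1e-8):
    """Per-pixel weight w = 1 - H(p)/log(C): confident pixels get weight
    near 1, maximally uncertain pixels near 0."""
    num_classes = soft_labels.shape[0]
    ent = -(soft_labels * np.log(soft_labels + eps)).sum(axis=0)
    return np.clip(1.0 - ent / np.log(num_classes), 0.0, 1.0)
```

In a mean-teacher setup these weights would multiply the per-pixel unsupervised loss, so noisy regions contribute less without being discarded outright.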
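Adaptive class rebalancing from batch statistics can be sketched as inverse-frequency weighting; the smoothing term and mean-normalization below are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def class_rebalance_weights(pseudo_labels, num_classes, smooth=1.0):
    """Inverse-frequency per-class loss weights from the current batch of
    pseudo-labels; rare classes get larger weights. `smooth` keeps weights
    finite for classes absent from the batch."""
    counts = np.bincount(pseudo_labels.ravel(), minlength=num_classes).astype(float)
    freq = (counts + smooth) / (counts.sum() + smooth * num_classes)
    weights = 1.0 / freq
    return weights / weights.mean()   # normalize so the average weight stays 1
```

Because the frequencies are recomputed per batch, the weighting tracks the evolving pseudo-label distribution rather than a fixed dataset prior.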
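The contrastive module can be approximated with batch-mean prototypes and an InfoNCE-style objective. This is a simplified sketch under our own assumptions (cosine similarity, temperature `tau`, prototypes recomputed per batch); the paper's module may differ in detail.

```python
import numpy as np

def prototype_contrastive_loss(features, labels, num_classes, tau=0.1):
    """Pull each pixel embedding toward its class prototype (batch mean) and
    push it away from other prototypes via softmax over cosine similarities.
    features: (D, N) L2-normalized embeddings; labels: (N,) class indices."""
    D, N = features.shape
    protos = np.zeros((num_classes, D))
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            p = features[:, mask].mean(axis=1)
            protos[c] = p / (np.linalg.norm(p) + 1e-8)   # renormalize prototype
    sims = protos @ features / tau                        # (num_classes, N)
    logits = sims - sims.max(axis=0, keepdims=True)       # numerical stability
    logp = logits - np.log(np.exp(logits).sum(axis=0, keepdims=True))
    return -logp[labels, np.arange(N)].mean()             # CE to own prototype
```

Keeping only one prototype per class (rather than a memory bank of pixel features) is what keeps the memory overhead light.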
| Backbone | 1/16 | 1/8 | 1/4 | 1/2 |
|---|---|---|---|---|
| ResNet-50 | 72.90 | 76.18 | 76.50 | 77.69 |
| ResNet-101 | 76.40 | 78.20 | 79.00 | 80.30 |
| Backbone | 1/16 | 1/8 | 1/4 | 1/2 |
|---|---|---|---|---|
| ResNet-50 | 75.20 | 77.50 | 78.00 | 79.60 |
| ResNet-101 | 77.20 | 78.80 | 80.00 | 81.00 |
All experiments use the DeepLabV3+ architecture with a mean-teacher EMA scheme.
```bibtex
@article{tarubinga2026farcluss,
  title   = {FARCLUSS: Fuzzy Adaptive Rebalancing and Contrastive
             Uncertainty Learning for Semi-Supervised Semantic Segmentation},
  author  = {Tarubinga, Ebenezer and Kalafatovich, Jenifer and Lee, Seong-Whan},
  journal = {Neural Networks},
  year    = {2026}
}
```
This research was supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant, funded by the Korea government (MSIT).