Topology-preserving transfer learning for weakly-supervised anomaly detection and segmentation

Research output: Contribution to journal › Article › peer-review

4 Scopus citations

Abstract

Models pre-trained on the ImageNet dataset are widely exploited for knowledge transfer in numerous downstream computer vision tasks, including weakly-supervised anomaly detection and segmentation. In anomaly segmentation specifically, prior work shows that representing images with feature maps extracted by pre-trained models significantly improves over previous techniques. This kind of representation requires features that are both high-quality and task-specific, yet feature extractors taken directly from ImageNet are very general. One intuitive way to obtain stronger features is to transfer a pre-trained model to the target dataset. However, in this paper, we show that under weakly-supervised settings, naïve fine-tuning techniques that typically work for supervised learning can cause catastrophic feature-space collapse and greatly reduce performance. We therefore propose to apply a topology-preserving constraint during transfer. Our method preserves the topology graph of the feature space, keeping it from collapsing under weakly-supervised settings. We then combine the transferred model with a simple anomaly detection and segmentation baseline for performance evaluation. Experiments show that our method achieves competitive accuracy on several benchmarks while setting a new state of the art for anomaly detection on the CIFAR100/10 and BTAD datasets.
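The abstract's core idea, keeping the pre-trained feature-space topology fixed while fine-tuning so that the space does not collapse, can be illustrated as a regularizer. The sketch below is not the paper's exact formulation: it builds a k-NN graph over the pre-trained features and penalizes distortion of the distances to those original neighbours during fine-tuning. All function names and the specific distance-matching penalty are assumptions for illustration.

```python
import numpy as np

def knn_graph(feats, k):
    """Return, for each sample, the indices of its k nearest
    neighbours in Euclidean distance (the topology graph)."""
    d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)  # exclude self-matches
    return np.argsort(d2, axis=1)[:, :k]

def topology_loss(pretrained_feats, current_feats, k=3):
    """Hypothetical topology-preservation penalty: for each sample,
    keep its distances to the *pre-trained* k-NN neighbours close to
    their original values. A collapsed feature space (all points
    merging) drives these distances to zero and is penalized."""
    nbrs = knn_graph(pretrained_feats, k)
    loss = 0.0
    for i, idx in enumerate(nbrs):
        d_old = np.linalg.norm(pretrained_feats[i] - pretrained_feats[idx], axis=1)
        d_new = np.linalg.norm(current_feats[i] - current_feats[idx], axis=1)
        loss += ((d_new - d_old) ** 2).mean()
    return loss / len(nbrs)
```

In a training loop, this term would be added (with a weight) to the task loss on the fine-tuned encoder's features; an unchanged feature space incurs zero penalty, while collapse toward a single point incurs a large one.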

Original language: English
Pages (from-to): 77-84
Number of pages: 8
Journal: Pattern Recognition Letters
Volume: 170
DOIs
State: Published - Jun 2023

Keywords

  • Anomaly detection
  • Topology preservation
  • Transfer learning
  • Weakly-supervised learning

