Conference paper, Year: 2025

Provably Safeguarding a Classifier from OOD and Adversarial Samples: an Extreme Value Theory Approach

Abstract

This paper introduces a novel method, Sample-efficient Probabilistic Detection using Extreme Value Theory (SPADE), which transforms a classifier into an abstaining classifier, offering provable protection against out-of-distribution (OOD) and adversarial samples. The approach is based on a Generalized Extreme Value (GEV) model of the training distribution in the classifier's latent space, enabling the formal characterization of OOD samples. Interestingly, under mild assumptions, the GEV model also allows for formally characterizing adversarial samples. The abstaining classifier, which rejects samples based on their assessment by the GEV model, provably avoids OOD and adversarial samples. The empirical validation of the approach, conducted on various neural architectures (ResNet, VGG, and Vision Transformer) and medium- and large-sized datasets (CIFAR-10, CIFAR-100, and ImageNet), demonstrates its frugality, stability, and efficiency compared to the state of the art.
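
As an illustration of the kind of rejection rule the abstract describes, the sketch below fits a GEV model to block maxima of a latent-space score on the training data and abstains when a test sample's score is too extreme under that model. The distance-to-centroid score, the block size, the threshold alpha, and the function names are illustrative assumptions, not the authors' code; the actual SPADE statistic and its formal guarantees are those defined in the paper.

```python
# Minimal sketch (not the authors' released code) of a GEV-based abstention rule.
# Assumption: the latent-space score is the Euclidean distance to the predicted
# class centroid, and the GEV is fitted on block maxima of that score.
import numpy as np
from scipy.stats import genextreme


def fit_gev_per_class(latents, labels, block_size=50, seed=0):
    """Fit one GEV model per class on block maxima of centroid distances."""
    rng = np.random.default_rng(seed)
    gev_params, centroids = {}, {}
    for c in np.unique(labels):
        z = latents[labels == c]
        centroids[c] = z.mean(axis=0)
        dist = np.linalg.norm(z - centroids[c], axis=1)
        dist = rng.permutation(dist)
        n_blocks = len(dist) // block_size
        block_maxima = dist[: n_blocks * block_size].reshape(n_blocks, block_size).max(axis=1)
        gev_params[c] = genextreme.fit(block_maxima)  # (shape, loc, scale)
    return gev_params, centroids


def should_abstain(latent, predicted_class, gev_params, centroids, alpha=0.05):
    """Abstain when the sample's score is extreme under the class GEV model."""
    score = np.linalg.norm(latent - centroids[predicted_class])
    shape, loc, scale = gev_params[predicted_class]
    # Probability that a training block maximum exceeds this score.
    p_exceed = genextreme.sf(score, shape, loc=loc, scale=scale)
    return p_exceed < alpha  # True: likely OOD or adversarial, reject the prediction
```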
Main file
2501.10202v1.pdf (3.03 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04922382, version 1 (30-01-2025)

Identifiers

  • HAL Id: hal-04922382, version 1

Cite

Nicolas Atienza, Christophe Labreuche, Johanne Cohen, Michèle Sebag. Provably Safeguarding a Classifier from OOD and Adversarial Samples: an Extreme Value Theory Approach. ICLR 2025 - The Thirteenth International Conference on Learning Representations, Apr 2025, Singapore. ⟨hal-04922382⟩
