
iPRoBe-lab/ADV-GEN-IrisPAD


A Parametric Approach to Adversarial Augmentation for Cross-Domain Iris Presentation Attack Detection

Abstract: Iris-based biometric systems are vulnerable to presentation attacks (PAs), where adversaries present physical artifacts (e.g., printed iris images, textured contact lenses) to defeat the system. This has led to the development of various presentation attack detection (PAD) algorithms, which typically perform well in intra-domain settings. However, they often struggle to generalize effectively in cross-domain scenarios, where training and testing employ different sensors, PA instruments, and datasets. In this work, we use adversarial training samples of both bonafide irides and PAs to improve the cross-domain performance of a PAD classifier. The novelty of our approach lies in leveraging transformation parameters from classical data augmentation schemes (e.g., translation, rotation) to generate adversarial samples. We achieve this through a convolutional autoencoder, ADV-GEN, that inputs original training samples along with a set of geometric and photometric transformations. The transformation parameters act as regularization variables, guiding ADV-GEN to generate adversarial samples in a constrained search space. Experiments conducted on the LivDet-Iris 2017 database, comprising four datasets, and the LivDet-Iris 2020 dataset, demonstrate the efficacy of our proposed method.

Link to paper: https://arxiv.org/pdf/2412.07199
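To make the idea of conditioning on transformation parameters concrete, here is a minimal sketch of how a parameter vector covering the listed geometric and photometric transformations might be sampled, applied, and flattened into a conditioning vector. It uses torchvision's functional transforms; the parameter ranges, dictionary keys, and helper names are illustrative assumptions, not the paper's exact configuration.

```python
# Illustrative sketch only: parameter ranges and helper names are assumptions.
import torch
import torchvision.transforms.functional as TF

def sample_transform_params():
    """Sample one geometric + photometric parameter set (ranges are illustrative)."""
    return {
        "translate": [int(torch.randint(-10, 11, (1,))), int(torch.randint(-10, 11, (1,)))],
        "angle":      float(torch.empty(1).uniform_(-30.0, 30.0)),
        "shear":      float(torch.empty(1).uniform_(-10.0, 10.0)),
        "scale":      float(torch.empty(1).uniform_(0.8, 1.2)),
        "solarize":   float(torch.empty(1).uniform_(0.5, 1.0)),
        "sharpness":  float(torch.empty(1).uniform_(0.5, 2.0)),
        "brightness": float(torch.empty(1).uniform_(0.7, 1.3)),
        "contrast":   float(torch.empty(1).uniform_(0.7, 1.3)),
    }

def apply_transform(img, p):
    """Apply the sampled parameters to a float image tensor in [0, 1]."""
    img = TF.affine(img, angle=p["angle"], translate=p["translate"],
                    scale=p["scale"], shear=[p["shear"]])
    img = TF.solarize(img, threshold=p["solarize"])
    img = TF.adjust_sharpness(img, p["sharpness"])
    img = TF.adjust_brightness(img, p["brightness"])
    img = TF.adjust_contrast(img, p["contrast"])
    return img

def params_to_vector(p):
    """Flatten the parameter set into the conditioning vector fed to the generator."""
    return torch.tensor([p["translate"][0], p["translate"][1], p["angle"], p["shear"],
                         p["scale"], p["solarize"], p["sharpness"], p["brightness"],
                         p["contrast"]], dtype=torch.float32)
```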

ADV-GEN Framework

A convolutional autoencoder (CAE) is trained on the original iris images, both bonafide and PA, together with transformation parameter vectors, to generate synthetic adversarial images. A transformation parameter vector encodes geometric and photometric transformations such as translation, rotation, shear, solarize, scale, sharpness, brightness, and contrast. A reconstruction loss (mean squared error) enforces similarity between the generated transformed iris images and the classically transformed originals. To make the generated images adversarial, they are passed through the Standard PAD classifier (trained on the original images) to obtain PA scores, and an adversarial loss (BCE between these PA scores and the inverted class labels, maximizing the misclassification error) is backpropagated into the CAE, forcing the generated images to be misclassified by the Standard PAD classifier.
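The paragraph above describes a joint objective: a reconstruction MSE term plus an adversarial BCE term computed against inverted labels through the frozen PAD classifier. Below is a minimal PyTorch sketch of one training step under that description; the module names (`adv_gen`, `pad_clf`), the label convention, and the weighting factor `lambda_adv` are assumptions for illustration, not the repository's actual code.

```python
# Hypothetical sketch, assuming a CAE `adv_gen(image, param_vector)`, a frozen PAD
# classifier `pad_clf` returning logits, float labels of shape (N, 1), and
# `transformed_targets` produced by classical augmentation of the same batch.
import torch
import torch.nn.functional as F

def training_step(adv_gen, pad_clf, images, labels, params_vec,
                  transformed_targets, optimizer, lambda_adv=1.0):
    """One ADV-GEN optimization step: reconstruction loss + adversarial loss."""
    optimizer.zero_grad()

    # The CAE generates a transformed version of each input, conditioned on the
    # transformation parameter vector.
    generated = adv_gen(images, params_vec)

    # Reconstruction loss: generated images should match the classically
    # transformed originals.
    recon_loss = F.mse_loss(generated, transformed_targets)

    # Adversarial loss: push the PAD classifier's scores toward the inverted labels.
    pa_scores = pad_clf(generated)          # logits, shape (N, 1)
    inverted = 1.0 - labels                 # bonafide <-> PA
    adv_loss = F.binary_cross_entropy_with_logits(pa_scores, inverted)

    loss = recon_loss + lambda_adv * adv_loss
    loss.backward()
    optimizer.step()                        # only the CAE's parameters are updated;
                                            # the PAD classifier stays frozen
    return recon_loss.item(), adv_loss.item()
```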
