By Song Bai, Yingwei Li, Yuyin Zhou, Qizhu Li, and Philip H.S. Torr.
This repository contains the code for the paper Adversarial Metric Attack for Person Re-identification. The paper proposes the adversarial metric attack, a methodology parallel to the existing adversarial classification attack. Instead of fooling a classifier, an adversarial metric attack targets metric-based systems such as person re-identification and generates adversarial examples accordingly.
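As a rough illustration of the idea (a minimal sketch, not the repository's exact implementation), a non-targeted metric attack perturbs a probe image so that its feature moves away from the feature of its true gallery match. The `model`, function name, epsilon value, and the assumed [0, 1] pixel range below are all illustrative assumptions:

```python
# Minimal sketch of a non-targeted adversarial metric attack (FGSM-style).
# `model` is any feature extractor (e.g., a ResNet-50 without its classifier
# head); names, epsilon, and the [0, 1] pixel range are assumptions of this
# sketch, not the repository's API.
import torch
import torch.nn.functional as F

def metric_attack_fgsm(model, probe, gallery_match, epsilon=8.0 / 255):
    """Perturb `probe` so its feature moves away from `gallery_match`'s."""
    model.eval()
    probe = probe.clone().detach().requires_grad_(True)
    with torch.no_grad():
        target_feat = model(gallery_match)   # fixed feature of the true match
    probe_feat = model(probe)
    # Attack the metric itself: push the pair's Euclidean distance up.
    loss = F.pairwise_distance(probe_feat, target_feat).mean()
    loss.backward()
    # One signed-gradient step, clipped back to a valid image.
    adv = probe + epsilon * probe.grad.sign()
    return adv.clamp(0, 1).detach()
```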
- PyTorch 0.4.1
- NumPy
- Python 2.7
Attack. For example, to attack a ResNet-50 model trained with the cross-entropy loss:
```bash
python Gen_Adv.py \
    --loss_type=soft \
    --name=resnet_50 \
    --save_img \
    --save_fea
```
It will save the adversarial images and features.
Test. To evaluate the model on the generated adversarial examples:
```bash
python evaluate_adv.py \
    --loss_type=soft \
    --name=resnet_50
```
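For context, re-identification is evaluated by ranking the gallery features against each query feature; the snippet below is a generic rank-1 sketch of that computation, not the code of `evaluate_adv.py`, and all tensor names and shapes are illustrative:

```python
# Generic rank-1 evaluation sketch: rank gallery features by Euclidean
# distance to each query feature and check the identity of the nearest match.
# Variable names and shapes are illustrative, not evaluate_adv.py's API.
import torch

def rank1_accuracy(query_feats, query_ids, gallery_feats, gallery_ids):
    # (num_query, num_gallery) squared Euclidean distance matrix
    diff = query_feats.unsqueeze(1) - gallery_feats.unsqueeze(0)
    dist = (diff ** 2).sum(dim=-1)
    nearest = dist.argmin(dim=1)  # index of the closest gallery image
    return (gallery_ids[nearest] == query_ids).float().mean().item()
```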
Shell in one trial. To run the attack and the evaluation in a single script. We support three attack methods: FGSM, I-FGSM, and MI-FGSM (see the sketch after the command).
```bash
sh adv.sh
```
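The three methods share one update rule: FGSM takes a single signed-gradient step, I-FGSM iterates it, and MI-FGSM adds a momentum term. Below is a minimal sketch of that family, assuming a differentiable metric loss such as the one sketched earlier; the function name, defaults, and normalization are illustrative, not the flags or code of `adv.sh`:

```python
# Sketch of the gradient-attack family the script supports.
# With steps=1 this reduces to FGSM; with decay=0 it is I-FGSM; with
# decay > 0 it is MI-FGSM. All names and defaults are illustrative.
import torch

def mi_fgsm(loss_fn, x, epsilon=8.0 / 255, steps=10, decay=1.0):
    alpha = epsilon / steps            # per-step budget
    momentum = torch.zeros_like(x)
    adv = x.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = loss_fn(adv)            # any differentiable metric loss
        grad, = torch.autograd.grad(loss, adv)
        # Accumulate a normalized gradient as momentum (MI-FGSM's key idea).
        momentum = decay * momentum + grad / grad.abs().mean().clamp(min=1e-12)
        adv = adv.detach() + alpha * momentum.sign()
        # Project back into the epsilon-ball and the valid image range.
        adv = torch.max(torch.min(adv, x + epsilon), x - epsilon).clamp(0, 1)
    return adv.detach()
```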
Figure: the ranking list of the non-targeted attack.
Figure: the ranking list of the targeted attack.
If you find the code useful, please cite the following paper:
```
@article{bai2019adversarial,
  title={Adversarial Metric Attack for Person Re-identification},
  author={Bai, Song and Li, Yingwei and Zhou, Yuyin and Li, Qizhu and Torr, Philip HS},
  journal={arXiv preprint arXiv:1901.10650},
  year={2019}
}
```
If you encounter any problems or have any inquiries, please contact [email protected] or [email protected].