
Register adversarially trained backbones in timm #2509

Open · wants to merge 10 commits into base: main
Conversation

mzweilin (Contributor)
📝 Description

This PR registers 48 model weights from adversarial training in timm. Users can specify adversarially trained backbones with names like resnet18.adv_l2_0.1 and wide_resnet50_2.adv_linf_1. We remove support for the __AT__ token in the backbone string to conform to the timm naming format.

Here are all 48 available weights:

model_names = ["resnet18", "resnet50", "wide_resnet50_2"]
l2_epsilons = [0, 0.01, 0.03, 0.05, 0.1, 0.25, 0.5, 1, 3, 5]
linf_epsilons = [0, 0.5, 1, 2, 4, 8]
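For clarity, a minimal sketch of how the three model names and two epsilon lists combine into the 48 pretrained tags, using the naming scheme from this PR (e.g. resnet18.adv_l2_0.1); the helper function is illustrative only, not part of the PR:

```python
model_names = ["resnet18", "resnet50", "wide_resnet50_2"]
l2_epsilons = [0, 0.01, 0.03, 0.05, 0.1, 0.25, 0.5, 1, 3, 5]
linf_epsilons = [0, 0.5, 1, 2, 4, 8]


def adv_tags(models, l2_eps, linf_eps):
    """Build timm-style pretrained tags for every model/epsilon pair."""
    tags = []
    for name in models:
        tags += [f"{name}.adv_l2_{eps}" for eps in l2_eps]
        tags += [f"{name}.adv_linf_{eps}" for eps in linf_eps]
    return tags


tags = adv_tags(model_names, l2_epsilons, linf_epsilons)
print(len(tags))  # 3 models x (10 + 6) epsilons = 48
```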

The model weights are derived from https://huggingface.co/madrylab/robust-imagenet-models

Here is a simple test to verify that the model weights are loaded:

import torch
from anomalib.models.components.feature_extractors.timm import TimmFeatureExtractor

weights = torch.hub.load_state_dict_from_url("https://huggingface.co/mzweilin/robust-imagenet-models/resolve/main/wide_resnet50_2_l2_eps5.pth")
backbone = "wide_resnet50_2.adv_l2_5"

model = TimmFeatureExtractor(backbone=backbone, layers=[], pre_trained=True)
assert torch.all(weights["conv1.weight"] == model.feature_extractor.conv1.weight)

✨ Changes

Select what type of change your PR is:

  • 🐞 Bug fix (non-breaking change which fixes an issue)
  • 🔨 Refactor (non-breaking change which refactors the code base)
  • 🚀 New feature (non-breaking change which adds functionality)
  • 💥 Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • 📚 Documentation update
  • 🔒 Security update

✅ Checklist

Before you submit your pull request, please make sure you have completed the following steps:

  • 📋 I have summarized my changes in the CHANGELOG and followed the guidelines for my type of change (skip for minor changes, documentation updates, and test enhancements).
  • 📚 I have made the necessary updates to the documentation (if applicable).
  • 🧪 I have written tests that support my changes and prove that my fix is effective or my feature works (if applicable).

For more information about code review checklists, see the Code Review Checklist.

Signed-off-by: Weilin Xu <[email protected]>


# Model weights are registered only once even if the module is imported repeatedly, because Python caches imported modules as singletons.
try_register_in_bulk()
Collaborator

Was this intentional or is this line a leftover?

Contributor Author

This is intentional. If we want to make the extra weights available in Anomalib, we need to register them in timm. We wrap the call in a try/except because the internal functions of timm may change over time.
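The defensive pattern described here could be sketched as follows. Note that try_register_in_bulk's body and the _register_adv_weights helper are hypothetical stand-ins: the actual timm registry API touched by this PR is private and may change between releases, which is exactly why the errors are caught.

```python
import logging

logger = logging.getLogger(__name__)


def try_register_in_bulk() -> bool:
    """Attempt to register extra pretrained weights in timm.

    Returns True on success, False if timm is missing or its private
    registry API has changed. The registration helper below is a
    hypothetical stand-in for the PR's actual registration code.
    """
    try:
        import timm  # noqa: F401  # fails cleanly if timm is absent
        _register_adv_weights()
    except (ImportError, AttributeError) as err:
        # timm's internal registry functions are private and may change
        # between releases, so degrade gracefully instead of crashing
        # every import of anomalib.
        logger.warning("Could not register adversarial weights: %s", err)
        return False
    return True


def _register_adv_weights() -> None:
    # Placeholder: in this sketch we simulate the private registry API
    # being unavailable, to exercise the fallback path.
    raise AttributeError("registry API unavailable in this sketch")
```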

Contributor

I have the feeling that it would make sense to let the user register new models.
There could be documentation about the available adversarially trained backbones, but is it really in the scope of the library to provide all sorts of "special" models?

Collaborator

I agree. We should design a proper mechanism to register new models. Personally, I am not in favour of calling methods unless explicitly specified. In this case, try_register_in_bulk() will be executed on an import call like from anomalib.models.components import TimmFeatureExtractor. Ideally if the users specify the adversarially trained backbone then we should register the model to timm and create an instance with it. This can easily be done from within the class.

@mzweilin (Contributor Author)

@ashwinvaidya17 One check failed because of an unrelated issue:

examples/notebooks/700_metrics/701e_aupimo_advanced_iv.ipynb:cell_19:85:34: PT028 Test function parameter `min_abs_diff` has default argument
