Register adversarially trained backbones in timm
#2509
base: main
Conversation
…ples/notebooks/100_datamodules/101_btech.ipynb, examples/notebooks/100_datamodules/102_mvtec.ipynb, examples/notebooks/100_datamodules/103_folder.ipynb, examples/notebooks/100_datamodules/104_tiling.ipynb, examples/notebooks/200_models/201_fastflow.ipynb, examples/notebooks/400_openvino/401_nncf.ipynb, examples/notebooks/500_use_cases/501_dobot/501b_inference_with_a_robotic_arm.ipynb, examples/notebooks/600_loggers/601_mlflow_logging.ipynb, examples/notebooks/700_metrics/701a_aupimo.ipynb, examples/notebooks/700_metrics/701b_aupimo_advanced_i.ipynb, examples/notebooks/700_metrics/701c_aupimo_advanced_ii.ipynb, examples/notebooks/700_metrics/701d_aupimo_advanced_iii.ipynb, examples/notebooks/700_metrics/701e_aupimo_advanced_iv.ipynb: convert to Git LFS
Signed-off-by: Weilin Xu <[email protected]>
# We will register model weights only once even if we import the module repeatedly, because it is a singleton.
try_register_in_bulk()
Was this intentional or is this line a leftover?
This is intentional. If we want to make the extra weights available in Anomalib, we need to register them in timm. We catch any error here because the internal functions of timm may change over time.
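For illustration only, here is a minimal sketch of that pattern, assuming a placeholder `_register_adversarial_weights` helper rather than the PR's actual registration code: the module registers once at import time and logs, instead of raising, any failure caused by changes in timm's internals.

```python
import logging

logger = logging.getLogger(__name__)

_REGISTERED = False  # module-level flag: repeated imports register only once


def _register_adversarial_weights() -> None:
    """Placeholder for the real registration of the adversarial checkpoints."""
    raise NotImplementedError  # the actual timm registration logic would go here


def try_register_in_bulk() -> None:
    """Best-effort registration of the extra backbone weights in timm."""
    global _REGISTERED
    if _REGISTERED:
        return
    try:
        _register_adversarial_weights()
    except Exception:  # intentionally broad: timm internals may change over time
        logger.exception("Could not register adversarially trained weights in timm.")
    else:
        _REGISTERED = True


# Executed at import time, mirroring the line under discussion.
try_register_in_bulk()
```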
I have the feeling that it would make sense to let the user register new models.
There could be documentation about adversarially trained backbones available, but is it really in the scope of the library to provide all sorts of "special" models?
I agree. We should design a proper mechanism to register new models. Personally, I am not in favour of calling methods unless explicitly specified. In this case, `try_register_in_bulk()` will be executed on an import such as `from anomalib.models.components import TimmFeatureExtractor`. Ideally, if the user specifies an adversarially trained backbone, we should register the model in timm and create an instance with it. This can easily be done from within the class.
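A rough sketch of that alternative, where the `register_adversarial_weights` helper, the `.adv_` name check, and the simplified class are assumptions for illustration and not the actual `TimmFeatureExtractor` code:

```python
import timm
from torch import nn


def register_adversarial_weights() -> None:
    """Hypothetical, idempotent helper that adds the adversarial checkpoints to timm."""
    raise NotImplementedError  # placeholder in this sketch


class FeatureExtractorSketch(nn.Module):
    """Simplified stand-in for TimmFeatureExtractor showing lazy registration."""

    def __init__(self, backbone: str, pre_trained: bool = True) -> None:
        super().__init__()
        # Register the extra weights only when the requested backbone needs them
        # (treating an ".adv_" pretrained tag as the marker is an assumption).
        if ".adv_" in backbone:
            register_adversarial_weights()
        self.feature_extractor = timm.create_model(
            backbone, pretrained=pre_trained, features_only=True
        )

    def forward(self, inputs):
        return self.feature_extractor(inputs)
```

This keeps imports free of side effects and only touches timm's registry when an adversarially trained backbone is actually requested.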
@ashwinvaidya17 One check failed because of an irrelevant issue:
📝 Description
This PR registers 48 model weights from adversarial training in `timm`. Users can specify adversarially trained backbones with names like `resnet18.adv_l2_0.1` and `wide_resnet50_2.adv_linf_1`. We remove support for the `__AT__` token in the backbone string to conform with the timm naming format. Here are all the available weights:
The model weights are derived from https://huggingface.co/madrylab/robust-imagenet-models
Here is a simple test to verify that the model weights are loaded:
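A minimal check along these lines, assuming the registration above has run (the exact snippet may differ from the one in the PR):

```python
import timm
import torch

# Create one of the newly registered backbones with its adversarially trained
# weights; the dotted name follows timm's "<architecture>.<pretrained_tag>" format.
model = timm.create_model("resnet18.adv_l2_0.1", pretrained=True).eval()

# Smoke test: a forward pass on an ImageNet-sized input should return 1000 logits
# per image if the checkpoint was found and loaded.
with torch.no_grad():
    logits = model(torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 1000])
```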
✨ Changes
Select what type of change your PR is:
✅ Checklist
Before you submit your pull request, please make sure you have completed the following steps:
For more information about code review checklists, see the Code Review Checklist.