Hello, after going through the code together with your paper, I have the following questions (about vit_csra):
In the code, the class token is not used as input to the final CSRA module, so why is the class token set at all in "VIT_CSRA"?
Has the final MLP head used for classification in the vision transformer been removed entirely?
"why set the class token in the code in "VIT_CSRA" ": setting class token at the beginning is the original structure of VIT, which is not in the range of our modification. What we do is to fit CSRA into the VIT models (e.g. use all the final patch embeddings instead of the final one single class token).
"Has the last MLP head used for classification in the vision transformer been deleted directly?", Yes. We use our CSRA module instead the original classification head.