Update for SAM 2.1 #10
base: main
Conversation
Also, I've added a fix for a mistaken dependency on pytest.

Reproduce:

Test Fix:
Are these new functions used somewhere? This fork was created with the aim to be a minimal version of the original codebase.
```diff
@@ -245,9 +240,7 @@ def forward(self, q: Tensor, k: Tensor, v: Tensor) -> Tensor:
         dropout_p = self.dropout_p if self.training else 0.0
         # Attention
-        with torch.nn.attention.sdpa_kernel(get_sdp_backends(dropout_p)):
```
Any particular reason for removing this? I created the `get_sdp_backends` function to support the newer backends if available on the system being used.
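For context, the backend-selection idea behind a helper like this can be sketched without touching PyTorch itself. The function name, backend names, and the dropout gating below are illustrative assumptions, not the repo's actual logic:

```python
def select_sdp_backends(has_flash: bool, has_mem_efficient: bool, dropout_p: float) -> list[str]:
    """Pick scaled-dot-product-attention backends in preference order.

    Mirrors the idea of get_sdp_backends: prefer fused kernels when the
    running system supports them, keeping the math fallback last.
    """
    backends = []
    # Fused FlashAttention kernels are the fastest option when present;
    # gating on dropout_p is a conservative illustration of a capability check.
    if has_flash and dropout_p == 0.0:
        backends.append("FLASH_ATTENTION")
    # Memory-efficient attention is the next-best fused option.
    if has_mem_efficient:
        backends.append("EFFICIENT_ATTENTION")
    # The pure-PyTorch math implementation is always a valid fallback.
    backends.append("MATH")
    return backends
```

In the real code, the selected backends are handed to the `torch.nn.attention.sdpa_kernel` context manager shown in the diff above.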
Huh, this is interesting. I used ruff initially for sorting imports; it shouldn't re-organise like this.
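For reference, ruff only sorts imports when its isort-compatible rule set is enabled; a minimal `pyproject.toml` fragment (assuming a recent ruff version) looks like:

```toml
# Enable ruff's isort-compatible import-sorting rules ("I"),
# applied with `ruff check --fix`.
[tool.ruff.lint]
select = ["I"]
```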
```diff
         self.only_obj_ptrs_in_the_past_for_eval = only_obj_ptrs_in_the_past_for_eval

         # Part 2: memory attention to condition current frame's visual features
         # with memories (and obj ptrs) from past frames
         self.memory_attention = memory_attention
-        self.hidden_dim = memory_attention.d_model
+        self.hidden_dim = image_encoder.neck.d_model
```
Maybe I'm being pedantic, but just to confirm: this runs okay during forward inference? I don't see a `.neck` submodule.
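To make the concern concrete, here is a minimal sketch of the two wirings (class names and dimensions are invented for illustration): if the image encoder lacks a `.neck` attribute, the new line fails with `AttributeError` as soon as the model is constructed.

```python
class Neck:
    def __init__(self, d_model: int):
        self.d_model = d_model

class ImageEncoder:
    def __init__(self, neck: Neck):
        self.neck = neck  # the submodule whose existence is in question

class MemoryAttention:
    def __init__(self, d_model: int):
        self.d_model = d_model

memory_attention = MemoryAttention(d_model=256)
image_encoder = ImageEncoder(neck=Neck(d_model=256))

# Old wiring: hidden_dim taken from the memory-attention module.
hidden_dim_old = memory_attention.d_model
# New wiring: hidden_dim taken from the encoder's neck submodule; this
# raises AttributeError at construction time if the encoder has no .neck.
hidden_dim_new = image_encoder.neck.d_model
```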
Hey @drien thanks a lot for contributing ❤️ . Left a couple of review notes ☕
This brings in the code changes and configs for the 2.1 updates directly from the original sam2 repo. Everything seems to work as expected with both the 2.0 and 2.1 models. I just updated the variants to have a version prefix so there aren't breaking changes for users who've already integrated this with the 2.0 version. I duplicated some of the tests to run with the 2.1 models as well, and everything passes with both versions, but I've only done real-world testing using the 2.1 checkpoints.