I have a question about the attention processor source in diffusers compared with this repo.
It seems that in diffusers, the image embedding tokens are processed one by one in the attention module:
attention_processor.py
This means the IP attention is computed against the first image token and hidden_states is updated, then against the second token, and so on.
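To make sure I'm describing it correctly, here is a minimal sketch of the token-by-token pattern I'm reading (not the actual diffusers code, just an illustration in plain torch with made-up shapes, layer names, and a single-head layout):

```python
import torch
import torch.nn.functional as F

# Illustrative shapes only: hidden_states are the UNet query tokens,
# ip_tokens are the projected image-prompt embeddings (e.g. 4 tokens).
batch, q_len, dim, n_ip_tokens = 2, 77, 64, 4
hidden_states = torch.randn(batch, q_len, dim)
ip_tokens = torch.randn(batch, n_ip_tokens, dim)

to_q = torch.nn.Linear(dim, dim)
to_k_ip = torch.nn.Linear(dim, dim)
to_v_ip = torch.nn.Linear(dim, dim)
scale = 1.0

# Token-by-token: attend to a single image token, add the result to
# hidden_states, then move on to the next token.
for i in range(n_ip_tokens):
    single_token = ip_tokens[:, i : i + 1, :]  # (batch, 1, dim)
    query = to_q(hidden_states)
    key = to_k_ip(single_token)
    value = to_v_ip(single_token)
    ip_attn = F.scaled_dot_product_attention(query, key, value)
    hidden_states = hidden_states + scale * ip_attn
```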
That seems quite strange.
In the original repo's attention_processor.py, by contrast, all image tokens are processed in a single attention calculation, which is the way I believe is correct.
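For comparison, a sketch of the single-call pattern, again only an illustration with the same made-up setup rather than the repo's actual code:

```python
import torch
import torch.nn.functional as F

# Same illustrative setup as the sketch above.
batch, q_len, dim, n_ip_tokens = 2, 77, 64, 4
hidden_states = torch.randn(batch, q_len, dim)
ip_tokens = torch.randn(batch, n_ip_tokens, dim)

to_q = torch.nn.Linear(dim, dim)
to_k_ip = torch.nn.Linear(dim, dim)
to_v_ip = torch.nn.Linear(dim, dim)
scale = 1.0

# Single call: all image tokens form the key/value sequence of one attention
# op, so every query token attends over all image tokens in the same softmax.
query = to_q(hidden_states)
key = to_k_ip(ip_tokens)    # (batch, n_ip_tokens, dim)
value = to_v_ip(ip_tokens)
ip_attn = F.scaled_dot_product_attention(query, key, value)
hidden_states = hidden_states + scale * ip_attn
```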
Could you help confirm whether this is an issue or whether I misread the code? Thanks so much.
Hi @xiaohu2015, could you take a look at the question above and confirm whether this is an issue or a misunderstanding on my part? Thanks.