
Fix flaky test_assisted_decoding_matches_greedy_search #35951

Merged

Conversation

@ydshieh (Collaborator) commented Jan 29, 2025

What does this PR do?

I just used @gante's method to fix this flakiness. You are the best!

(test run 1000 times)
previously: 1% failures
this PR: 0% failures
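The before/after numbers above come from re-running the test many times and counting failures. A minimal sketch of that measurement loop (the test body here is a random stand-in, not the actual `test_assisted_decoding_matches_greedy_search`):

```python
import random

def flaky_check(failure_rate: float) -> bool:
    """Stand-in for a flaky test: fails with the given probability."""
    return random.random() >= failure_rate

def estimate_failure_rate(test, runs: int = 1000) -> float:
    """Run the test many times and report the observed failure rate."""
    failures = sum(1 for _ in range(runs) if not test())
    return failures / runs

random.seed(0)
# Before the fix: roughly 1% of runs fail.
before = estimate_failure_rate(lambda: flaky_check(0.01))
# After the fix: the check is deterministic, so no run fails.
after = estimate_failure_rate(lambda: flaky_check(0.0))
print(f"before: {before:.1%}, after: {after:.1%}")
```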

@ydshieh ydshieh requested review from zucchini-nlp and gante on January 29, 2025 09:13
@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@zucchini-nlp (Member) left a comment


Cool, thanks! I don't remember exactly now: was there a reason for not adding this VLM processor to all generation tests at first? I see some still don't have it, like assisted_generation_sample.

Joao, I think we're getting more reasons to make a default generation config for models; VLMs will be very, very happy.

@ydshieh (Collaborator, Author) commented Jan 29, 2025

> was there a reason for not adding this VLM processor

That's a question for @gante, but I guess we just focused on the tests that were failing and fixed them, and didn't try to apply the change to tests that were not failing at the time.

@zucchini-nlp (Member)

I guess it won't hurt to add it for all models, since it will fail anyway for some VLMs, existing or new. If Joao agrees, of course.

@ydshieh ydshieh force-pushed the fix_flaky_test_assisted_decoding_matches_greedy_search_2 branch from 064c51b to 94853bb on January 29, 2025 13:35
@ydshieh (Collaborator, Author) commented Jan 29, 2025

Let me merge this one first, and wait for @gante's input before further changes in a separate PR.

@ydshieh ydshieh merged commit cf90404 into main Jan 29, 2025
26 checks passed
@ydshieh ydshieh deleted the fix_flaky_test_assisted_decoding_matches_greedy_search_2 branch January 29, 2025 13:50
@gante (Member) commented Jan 29, 2025

@zucchini-nlp @ydshieh Yup, we should make models have their own generation config with custom defaults.

But this won't be an easy change, btw 😅 Possibly each model would need a function in its PreTrainedModel class
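One possible shape for that per-model hook is a method on the base class that subclasses override with their own defaults. This is a hypothetical sketch only, not the transformers implementation; the class names, method name, and config fields here are all assumptions:

```python
from dataclasses import dataclass

@dataclass
class GenerationConfigSketch:
    """Simplified stand-in for a generation config."""
    max_new_tokens: int = 20
    do_sample: bool = False
    num_beams: int = 1

class PreTrainedModelSketch:
    """Base class provides generic generation defaults."""
    def default_generation_config(self) -> GenerationConfigSketch:
        return GenerationConfigSketch()

class SomeVLMSketch(PreTrainedModelSketch):
    """A VLM overrides the hook with model-specific defaults."""
    def default_generation_config(self) -> GenerationConfigSketch:
        cfg = super().default_generation_config()
        cfg.max_new_tokens = 40  # e.g. extra room for image tokens
        return cfg

print(SomeVLMSketch().default_generation_config().max_new_tokens)  # prints 40
```

The point of the hook is that generic code (tests included) can ask any model for sensible defaults without hard-coding per-model knowledge.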

@ydshieh (Collaborator, Author) commented Jan 29, 2025

@gante I think @zucchini-nlp only means applying the same changes you and I made (like in this PR) to all generation tests, not changing anything in the non-testing code.

@gante (Member) commented Jan 29, 2025

Ah, I see.

But we should fix the underlying issue in the models too :D

@zucchini-nlp (Member)

> applies the same changes to all generation tests

Yep, exactly what I meant: a better workaround until the default generation config is added. Otherwise it is quite annoying to see new flakiness every time a new VLM is added.
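The workaround being discussed, applied uniformly, amounts to routing every generation test's inputs through the model's processor when one exists, so VLMs get the extra inputs they need. A hypothetical sketch of such a shared test helper (none of these names come from the actual test suite):

```python
class FakeVLMProcessor:
    """Stand-in for a VLM processor that attaches image inputs."""
    def prepare(self, inputs: dict) -> dict:
        return {**inputs, "pixel_values": [[0.0] * 4]}

def prepare_generation_inputs(inputs: dict, processor=None) -> dict:
    """Shared helper for all generation tests: when the model has a
    processor, let it add the model-specific inputs (e.g. pixel_values)
    instead of each test hand-crafting them and flaking for VLMs."""
    if processor is not None:
        return processor.prepare(inputs)
    return inputs

text_only = prepare_generation_inputs({"input_ids": [1, 2, 3]})
vlm = prepare_generation_inputs({"input_ids": [1, 2, 3]}, FakeVLMProcessor())
print(sorted(vlm))  # prints ['input_ids', 'pixel_values']
```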
