Issue encountered
Rerunning an evaluation with a new metric currently requires rerunning the entire inference, which can be very costly.

Solution/Feature
It would be great if we could specify a details file containing the predictions and use it to compute additional metrics.
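A minimal sketch of the intended workflow, assuming the details are stored as a parquet file; the path and the "predictions"/"gold" column names below are placeholders, not the actual details schema:

    import pandas as pd

    # Hypothetical path to a details file from a previous run (parquet format assumed).
    details_path = "details/my-model/2024-01-01T00-00-00/details_my_task_0.parquet"
    details = pd.read_parquet(details_path)

    # Assumed columns: "predictions" holds the stored model outputs, "gold" the references.
    # A new metric can then be computed directly from the stored predictions,
    # without rerunning inference.
    exact_match = (details["predictions"] == details["gold"]).mean()
    print(f"Exact match recomputed from stored predictions: {exact_match:.3f}")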
@clefourrier @NathanHB I am happy to implement this. Do you have suggestions for how to best solve this?
It would be great! I think the best way would be to recreate sample_id_to_responses from the details file and run the metric on those.
From the pipeline.py file:

    sample_id_to_responses = self._run_model()
    self._compute_metrics(sample_id_to_responses)
You would need to inspect what is in sample_id_to_responses and work out how to rebuild it from the details file.
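A rough sketch of that reconstruction, assuming the details file is a parquet file with one row per prediction; the "example" and "predictions" column names and the helper name below are assumptions to check against a real details file and against what _run_model() actually returns:

    from collections import defaultdict

    import pandas as pd

    def responses_from_details(details_path: str) -> dict:
        # Rebuild a sample_id -> responses mapping from a saved details file.
        # The parquet format and column names are assumptions; inspect an actual
        # details file to confirm how it maps onto sample_id_to_responses.
        details = pd.read_parquet(details_path)
        sample_id_to_responses = defaultdict(list)
        for _, row in details.iterrows():
            # Group the stored model outputs by the sample they belong to.
            sample_id_to_responses[row["example"]].append(row["predictions"])
        return sample_id_to_responses

With such a mapping in hand, the existing _compute_metrics step could be reused as-is and _run_model() skipped entirely.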
Great, will try that, thanks Nathan!