Should progressive_val_score use a default metric? #965
MaxHalford started this conversation in Polls
Replies: 1 comment 2 replies
-
I agree with the pros and cons. Before I vote, do you have an idea of how we would document the entry for …
-
The `progressive_val_score` function in the `evaluate` module implements progressive validation. It's super useful for assessing the performance of a model on a dataset. If you want to inspect the model and its performance at each step, then you have the `iter_progressive_val_score` function. If you want to go even deeper and look at the data at each step, then you have to code your own loop; you can leverage `stream.simulate_qa` to ease things. I think this is a great state of things: we have both one-liners and flexible tools to evaluate models.
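For context, the one-liner currently looks something like this. The dataset, model, and metric below are just illustrative picks on my part, nothing prescribed by the library:

```python
from river import datasets, evaluate, linear_model, metrics, preprocessing

# A binary classification stream and a simple pipeline, purely for illustration.
dataset = datasets.Phishing()
model = preprocessing.StandardScaler() | linear_model.LogisticRegression()

# Today the metric has to be passed explicitly.
metric = metrics.ROCAUC()

evaluate.progressive_val_score(
    dataset=dataset,
    model=model,
    metric=metric,
    print_every=200,  # print intermediate results every 200 samples
)
```

`iter_progressive_val_score` takes essentially the same inputs, but yields the intermediate results instead of printing them.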
In the spirit of making users' lives simple, how about we use default metrics in the `progressive_val_score` and `iter_progressive_val_score` functions? For instance, if the user omits the `metric` parameter, then we use `metrics.ClassificationReport` by default. In the case of an anomaly detector, we could use `metrics.ROCAUC`.
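To make the idea concrete, here is a rough sketch of what the dispatch could look like. `default_metric` is a hypothetical helper, not something that exists today, and the exact type checks (pipelines, wrappers, regressors, etc.) would need more thought:

```python
from river import base, metrics
from river.anomaly.base import AnomalyDetector


def default_metric(model):
    """Hypothetical helper: pick a sensible default metric from the model type."""
    if isinstance(model, AnomalyDetector):
        return metrics.ROCAUC()
    if isinstance(model, base.Classifier):
        return metrics.ClassificationReport()
    # Regressors, pipelines, wrappers, etc. would need their own rules.
    raise ValueError(f"No default metric is defined for {model!r}")


# The metric parameter would then become optional, e.g.:
#
#     def progressive_val_score(dataset, model, metric=None, **kwargs):
#         metric = default_metric(model) if metric is None else metric
#         ...
```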
The big pro for me is that it educates users as to which metrics are good picks by default. It makes you fall into the pit of success. Experienced users will know which metrics they want anyway, and power users will find a way to code their own metric and might not even use `progressive_val_score`, preferring to code the loop themselves. The con is that we do more hand-holding, by removing the need for users to take a look at the metrics documentation.
@online-ml/devs could you vote on this? :)
3 votes