-
Hi team, I have been able to convert my model exports into a datamart object. I notice there are quite a few functions available to explore, but I am not sure whether there is anything specifically for model performance metrics other than AUC, such as precision, recall, F-score, and information about sample size. In my experience I have used these metrics to evaluate the performance of classification models. Could you point me to where I could find these metrics, or alternatively how to calculate them? I would also like to request that these stats be included in the ADM health check report. Thank you!
-
Hi @operdeck / @StijnKas, not sure if anyone from the team received a notification for this question, so tagging both of you!
-
Hi @sushantkhare, apologies for the delayed response.

You're right, we don't provide these methods out of the box in pdstools, as we consider AUC to be the most appropriate metric for evaluating our adaptive models. It's certainly possible to calculate them, though; we do a similar calculation internally when recalculating AUC from the binning data here: https://github.com/pegasystems/pega-datascientist-tools/blob/master/python/pdstools/utils/cdh_utils.py#L302. You can reference that implementation if you want to derive your own functions.

In the R health check we've already included the PR (Precision/Recall) AUC as well, and we're considering adding it to the Python health check too. Is there a particular reason why precision/recall/F-score should be included in the health check? As of right now we don't have plans to include them, but if there's a valid use case for it we can always reconsider. I hope that helps; otherwise let us know!
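In case it's useful, here's a rough sketch of how you could compute precision, recall, and F-score yourself from per-bin positives/negatives counts, in the same spirit as the AUC recalculation linked above. To be clear, this is not part of the pdstools API: the function name, argument names, and the idea of thresholding on bin propensity are my own assumptions for illustration only.

```python
import numpy as np

def precision_recall_f1(bin_positives, bin_negatives, bin_propensities, threshold=0.05):
    """Precision, recall, F1 and sample size from per-bin positive/negative
    counts, treating bins with propensity >= threshold as predicted positive.
    (Illustrative sketch; the thresholding choice is an assumption.)"""
    pos = np.asarray(bin_positives, dtype=float)
    neg = np.asarray(bin_negatives, dtype=float)
    predicted_positive = np.asarray(bin_propensities, dtype=float) >= threshold

    tp = pos[predicted_positive].sum()   # positives in bins predicted positive
    fp = neg[predicted_positive].sum()   # negatives in bins predicted positive
    fn = pos[~predicted_positive].sum()  # positives in bins predicted negative

    precision = tp / (tp + fp) if (tp + fp) > 0 else float("nan")
    recall = tp / (tp + fn) if (tp + fn) > 0 else float("nan")
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) > 0 else float("nan"))
    return precision, recall, f1, int(pos.sum() + neg.sum())

# Example with made-up binning counts and propensities:
print(precision_recall_f1(
    bin_positives=[10, 40, 80],
    bin_negatives=[500, 300, 100],
    bin_propensities=[0.02, 0.12, 0.44],
    threshold=0.10,
))
```

Note that precision, recall, and F-score all depend on the chosen propensity threshold, which is one reason a threshold-free metric like (PR) AUC is easier to report consistently across models.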