The classic plot used for explainability is the "force plot" from the SHAP library:
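As a minimal sketch of how such a plot is produced (using a hypothetical random-forest regressor on scikit-learn's diabetes dataset as stand-in data):

```python
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Stand-in model and data: any fitted tree model and feature frame will do.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

# Force plot for a single observation (here, row 0).
shap.force_plot(explainer.expected_value, shap_values[0, :], X.iloc[0, :],
                matplotlib=True)
```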
However, this plot contains too much information. I believe it's better to focus on the "few vital" features that make the observation risky.
This can be done with a parallel plot:
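The sketch below, which reuses `X` and `shap_values` from the snippet above, picks the few vital features for one observation by ranking them on the magnitude of their SHAP values and keeping the top k (this cut-off rule is an assumption for illustration, not necessarily the article's exact criterion), then draws them as a parallel plot:

```python
import numpy as np
import matplotlib.pyplot as plt

i, k = 0, 3                                  # observation index, features to keep
order = np.argsort(-np.abs(shap_values[i]))  # most impactful features first
vital = list(X.columns[order[:k]])

# Parallel plot: the observation's percentile rank on each vital feature,
# so all axes share a common 0-100 scale.
pct = [(X[f] <= X[f].iloc[i]).mean() * 100 for f in vital]
plt.plot(vital, pct, marker="o")
plt.ylim(0, 100)
plt.ylabel("percentile of observation")
plt.title(f"Few vital features for observation {i}")
plt.show()
```

Percentile ranks are used here because they put every feature on the same 0-100 axis, which is what keeps a parallel plot readable across features with different units.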
Reference: https://towardsdatascience.com/bring-explainable-ai-to-the-next-level-by-finding-the-few-vital-causes-4838d46857de