Driverless AI provides robust interpretability of machine learning models, explaining modeling results in a human-readable format. In the Machine Learning Interpretability (MLI) view, Driverless AI employs a host of techniques and methodologies for interpreting and explaining the results of its models.
The set of MLI techniques and methodologies can be extended with recipes. With MLI BYOR (Bring Your Own Recipe), you can use your own recipes in combination with, or instead of, the built-in recipes, extending MLI explainers beyond the out-of-the-box techniques.
Custom explainer recipes can be uploaded into Driverless AI at runtime, much like a plugin, without having to restart the platform.
For security, safety, and performance best practices, please refer to the developer guide.
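To give a feel for what a custom explainer recipe looks like, here is a minimal sketch of an explainer class with a `setup()`/`fit()`/`explain()` lifecycle. This is an illustrative skeleton only: the class name, method signatures, and returned structure are assumptions, not the actual Driverless AI BYOR base-class API, which is defined in the developer guide.

```python
# Illustrative sketch only: a real recipe subclasses the Driverless AI
# BYOR explainer base class; the names and signatures here are assumptions.

class MyFeatureImportanceExplainer:
    """Hypothetical custom explainer skeleton (not the real DAI base class)."""

    _display_name = "My Feature Importance"

    def setup(self, model, persistence, **kwargs):
        # Receive the model to explain and a handle for persisting artifacts.
        self.model = model
        self.persistence = persistence

    def fit(self, X, y=None, **kwargs):
        # Optional: precompute anything reusable across explain() calls.
        return self

    def explain(self, X, y=None, **kwargs):
        # Compute and return the explanation artifacts for the dataset.
        # Here X is assumed to be a mapping of column name -> values.
        return {"feature_importance": {name: 0.0 for name in X}}
```

In a real recipe, `explain()` would build the explanation objects the MLI UI renders (e.g., feature importance, PD/ICE, or a Markdown report), persisting them via the platform's persistence layer.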
- Tutorials:
- Notebooks:
- Documentation:
IID explainers:
Explainer templates:
- Feature importance explainer template
- PD/ICE explainer template
- Decision tree explainer template
- Markdown report explainer template
- Markdown report with Vega charts template
- Markdown feature importance summary explainer template
- Scatter plot explainer template
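For context on the kind of computation a feature importance explainer performs, here is a minimal, framework-independent sketch of permutation importance. It is not taken from any of the templates above; the scoring-function signature and the dict-of-columns data layout are illustrative assumptions.

```python
import random

def permutation_importance(score_fn, X, y, n_repeats=5, seed=0):
    """Permutation importance: the drop in score when a column is shuffled.

    Assumptions (illustrative, not a DAI API): score_fn(X, y) -> float,
    higher is better; X is a dict mapping column name -> list of values.
    """
    rng = random.Random(seed)
    baseline = score_fn(X, y)
    importances = {}
    for col in X:
        drops = []
        for _ in range(n_repeats):
            # Shuffle one column while leaving the others intact.
            shuffled = list(X[col])
            rng.shuffle(shuffled)
            X_perm = dict(X, **{col: shuffled})
            drops.append(baseline - score_fn(X_perm, y))
        # Average drop over repeats; larger drop = more important column.
        importances[col] = sum(drops) / n_repeats
    return importances
```

For example, with a score function that only looks at column `a`, shuffling `a` degrades the score while shuffling `b` leaves it unchanged, so `a` receives the higher importance.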
Explainer examples: