Alibi
- Values: { explainability }
- Explanation type: { global surrogate, local surrogate, example-based, Shapley value, anchor, contrastive, counterfactual, ALE, gradient-based }
- Categories: { model-specific model-agnostic }
- Stage: { post-processing }
- Repository: https://github.com/SeldonIO/alibi
- Tasks: { classification regression }
- Input data: { image text tabular }
- Licence: Apache License 2.0
- Languages: { Python }
- Frameworks: { TensorFlow Keras }
Alibi is an open-source Python library that supports a wide range of interpretability techniques and explanation types. Its README provides an overview of the supported methods and when they are applicable. The following table of supported methods is copied from the README (slightly abbreviated):
Supported methods
Method | Models | Explanations | Classification | Regression | Tabular | Text | Images | Categorical features
---|---|---|---|---|---|---|---|---
ALE | BB | global | ✔ | ✔ | ✔ | | |
Anchors | BB | local | ✔ | | ✔ | ✔ | ✔ | ✔
CEM | BB* TF/Keras | local | ✔ | | ✔ | | ✔ |
Counterfactuals | BB* TF/Keras | local | ✔ | | ✔ | | ✔ |
Prototype Counterfactuals | BB* TF/Keras | local | ✔ | | ✔ | | ✔ | ✔
Integrated Gradients | TF/Keras | local | ✔ | ✔ | ✔ | ✔ | ✔ | ✔
Kernel SHAP | BB | local, global | ✔ | ✔ | ✔ | | | ✔
Tree SHAP | WB | local, global | ✔ | ✔ | ✔ | | | ✔
The README also explains the keys:
- BB - black-box (only requires a prediction function)
- BB* - black-box, but assumes the model is differentiable
- WB - requires white-box model access; there may be limitations on the models supported
- TF/Keras - TensorFlow models via the Keras API
- Local - instance-specific explanation: why was this prediction made?
- Global - explains the model with respect to a set of instances
For more detailed information on the supported methods, see the algorithm overview.
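To make the black-box (BB) workflow concrete, here is a minimal sketch using the Anchors explainer on tabular data. The scikit-learn dataset and classifier are illustrative choices, not part of the Alibi documentation; the explainer itself only receives a prediction function, matching the BB entry in the table above.

```python
# Minimal sketch: Anchors on tabular data (black-box, local explanation).
# Assumes alibi and scikit-learn are installed; the dataset and model
# below are illustrative choices, not prescribed by Alibi.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

from alibi.explainers import AnchorTabular

data = load_iris()
X, y = data.data, data.target

# Any model works here: Anchors only needs a prediction function (BB).
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
predict_fn = clf.predict

explainer = AnchorTabular(predict_fn, feature_names=data.feature_names)
explainer.fit(X)  # fit discretizers used when sampling perturbations

# Local explanation: "why was this prediction made?" for one instance.
explanation = explainer.explain(X[0], threshold=0.95)
print('Anchor:   ', explanation.anchor)     # feature conditions that "anchor" the prediction
print('Precision:', explanation.precision)  # fraction of perturbed samples keeping the prediction
print('Coverage: ', explanation.coverage)   # fraction of instances the anchor applies to
```

Gradient-based methods such as Integrated Gradients do not fit this black-box pattern: as the table indicates, they require white-box access to a TF/Keras model.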