# InterpretML

- Values: { explainability }
- Explanation type: { white box, Shapley value, partial dependence plot, sensitivity analysis }

- Categories: { model-agnostic model-specific }
- Stage: { in-processing post-processing }
- Repository: https://github.com/interpretml/interpret
- Tasks: { classification regression }
- Input data: { tabular text image }
- Licence: MIT
- Languages: { Python }
- References:

The InterpretML toolkit, developed at Microsoft, can be decomposed into two major components:

- A set of interpretable "glassbox" models
- Techniques for explaining black-box systems

Regarding the first component, InterpretML introduces a new interpretable "glassbox" model, the *Explainable Boosting Machine* (EBM), which combines Generalized Additive Models (GAMs) with machine learning techniques such as gradient-boosted trees.
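To illustrate why GAM-style additive models are interpretable, here is a minimal pure-Python sketch (not InterpretML code, and the shape functions are hypothetical): the prediction is a sum of independent per-feature contributions, so each feature's effect on a given prediction can be read off directly.

```python
# Minimal sketch of a GAM-style additive model (illustrative, not InterpretML code).
# The prediction is a sum of per-feature shape functions, so each feature's
# contribution to a prediction can be inspected in isolation.

def make_gam(intercept, shape_functions):
    """Return a predictor computing score(x) = intercept + sum_i f_i(x[i])."""
    def predict(x):
        contributions = [f(xi) for f, xi in zip(shape_functions, x)]
        return intercept + sum(contributions), contributions
    return predict

# Toy shape functions for two features (hypothetical "learned" effects).
f_age = lambda age: 0.02 * (age - 40)
f_income = lambda income: 0.5 if income > 50_000 else -0.5

predict = make_gam(intercept=0.1, shape_functions=[f_age, f_income])
score, contribs = predict([50, 60_000])
# contribs lists each feature's additive effect; score is their sum plus the intercept.
```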

Beyond this new interpretable model, the main utility of InterpretML is that it unifies existing explainability techniques under a single API.
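The idea behind a unified API can be sketched in pure Python (an illustrative protocol only; InterpretML's actual explainer classes expose similarly named `explain_global`/`explain_local` methods, but the classes below are hypothetical): distinct explanation techniques share one method contract, so downstream code can consume any of them interchangeably.

```python
# Sketch of a unified explainer interface (illustrative; not InterpretML's classes).

class ConstantExplainer:
    """Toy explainer assigning the same importance to every feature."""
    def __init__(self, feature_names, importance):
        self.feature_names = feature_names
        self.importance = importance

    def explain_global(self):
        # Global explanation: one importance score per feature.
        return {name: self.importance for name in self.feature_names}

class MeanAbsExplainer:
    """Toy explainer scoring each feature by its mean absolute value in the data."""
    def __init__(self, feature_names, X):
        self.feature_names = feature_names
        self.X = X

    def explain_global(self):
        n = len(self.X)
        return {
            name: sum(abs(row[j]) for row in self.X) / n
            for j, name in enumerate(self.feature_names)
        }

# Both explainers are consumed through the same method, regardless of technique.
for explainer in [
    ConstantExplainer(["age", "income"], 1.0),
    MeanAbsExplainer(["age", "income"], [[1, -2], [3, 4]]),
]:
    scores = explainer.explain_global()
```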

Interpret-Text is an extension of InterpretML to support various text models.

## Glassbox models

- Explainable Boosting Machine
- Decision tree
- Decision rule list
- Linear/logistic regression
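
As an illustration of one of these glassbox models, here is a minimal sketch of a decision rule list (pure Python; not InterpretML's implementation, and the rules are made up): rules are evaluated in order and the first matching rule decides the prediction, so every decision is explained by exactly one human-readable rule.

```python
# Sketch of a decision rule list (illustrative; not InterpretML's implementation).
# Rules are checked in order; the first condition that holds determines the
# prediction, so each decision traces back to a single rule.

rules = [
    (lambda x: x["age"] < 25, "low_risk"),
    (lambda x: x["income"] > 80_000, "low_risk"),
    (lambda x: True, "high_risk"),  # default fallback rule
]

def predict(x):
    for condition, label in rules:
        if condition(x):
            return label

# predict({"age": 30, "income": 90_000}) is decided by the second rule.
```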

## Blackbox explainers

- SHAP kernel explainer
- SHAP tree explainer
- LIME
- Morris sensitivity analysis
- Partial dependence plots
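
As a sketch of one of these techniques, here is a minimal pure-Python partial dependence computation (illustrative only; not InterpretML's API): the partial dependence of a model on one feature is the model's average prediction over the dataset while that feature is held fixed at each grid value.

```python
# Minimal partial dependence sketch (illustrative; not InterpretML's API).
# For each grid value v of feature j, set x[j] = v in every row of the
# dataset and average the model's predictions.

def partial_dependence(model, X, feature_index, grid):
    averages = []
    for value in grid:
        preds = []
        for row in X:
            modified = list(row)
            modified[feature_index] = value  # hold the feature fixed
            preds.append(model(modified))
        averages.append(sum(preds) / len(preds))
    return averages

# Toy black-box model and dataset.
model = lambda x: 2 * x[0] + x[1]
X = [[1, 10], [2, 20], [3, 30]]
pd_curve = partial_dependence(model, X, feature_index=0, grid=[0, 1, 2])
# The second feature (mean 20) is averaged out, leaving the effect of the first.
```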

The package thus contains both model-agnostic and model-specific explainers.