Contrastive Explanation Method (CEM)
- Values: { explainability }
- Explanation type: { contrastive }
- Categories: { model-agnostic }
- Stage: { post-processing }
- Repository: https://github.com/IBM/Contrastive-Explanation-Method
- Tasks: { classification }
- Input data: { image }
- Licence: Apache License 2.0
- Languages: { Python }
- References:
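Dhurandhar et al., "Explanations based on the Missing: Towards Contrastive Explanations with Pertinent Negatives", NeurIPS 2018. https://arxiv.org/abs/1802.07623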
Dhurandhar et al. propose a type of contrastive explanation based on what they call pertinent negatives. A contrastive explanation answers the question: "Why P, rather than Q?"
CEM supports such an explanation by finding a minimal set of features that leads to prediction P (a pertinent positive, which resembles an anchor explanation), together with a minimal set of features that must be absent to maintain decision P rather than the decision for the closest class Q (a pertinent negative, which is somewhat similar to a counterfactual).
CEM is also implemented in the broader Alibi library, as sketched below.
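For illustration, here is a minimal sketch of generating a pertinent negative via Alibi's `CEM` explainer. It assumes Alibi's documented interface (a black-box prediction function, a `mode` of `'PN'` or `'PP'`, and an instance `shape`); `model`, `X_train`, and `X_test` are placeholders for a trained image classifier and its data, and argument names such as `kappa`, `beta`, and `no_info_type` follow Alibi's documentation but may vary between versions.

```python
import numpy as np
from alibi.explainers import CEM

# `model`, `X_train`, and `X_test` are assumed to exist: a trained image
# classifier and its data. CEM only needs a black-box prediction function
# mapping a batch of inputs to class probabilities, keeping it model-agnostic.
predict_fn = lambda x: model.predict(x)

shape = (1,) + X_train.shape[1:]  # shape of a single instance, e.g. (1, 28, 28, 1)

# Pertinent negative (mode='PN'): a minimal perturbation whose absence is
# required to maintain the original prediction P.
cem = CEM(predict_fn, mode='PN', shape=shape,
          kappa=0.0,            # confidence margin on the class scores
          beta=0.1,             # L1 penalty encouraging sparse perturbations
          max_iterations=1000)
cem.fit(X_train, no_info_type='median')  # per-feature "no information" baseline
explanation = cem.explain(X_test[0:1])

print(explanation.PN_pred)  # class the instance flips to once the PN is applied

# Passing mode='PP' instead searches for a pertinent positive: the minimal
# set of features that is by itself sufficient for prediction P.
```

Since the perturbation is optimised against `predict_fn` alone, the same sketch applies to any classifier that exposes class probabilities, which is what makes the method model-agnostic.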