Alibi Detect
- Values: { security fairness }
- Fairness type: { group fairness }
- Categories: { model-agnostic }
- Stage: { design phase }
- Repository: https://github.com/SeldonIO/alibi-detect
- Tasks: { classification }
- Input data: { text image tabular time series }
- Licence: Apache License 2.0
- Languages: { Python }
- Frameworks: { TensorFlow PyTorch }
Alibi Detect is an open source Python library (a sister library to Alibi) focused on detecting outliers, adversarial examples, and concept drift.
Detecting adversarial examples is relevant for assessing the security of machine learning models. Machine learning models learn complex statistical patterns from datasets. If these statistical patterns "drift" in unforeseen ways after a model is deployed, model performance will degrade over time. In systems where model predictions affect people, such drift can threaten the fairness of the predictions.
For each of these detection problems, Alibi Detect supports a broad array of methods. The README contains tables summarising the characteristics of the supported methods for outlier detection, adversarial detection, and drift detection, along with paper references for each method. A minimal drift detection sketch is shown below.
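As a minimal sketch of how drift detection works in practice, the example below fits a Kolmogorov-Smirnov drift detector (`KSDrift` from `alibi_detect.cd`) on a reference dataset and tests a new batch against it. The reference and production data here are hypothetical, generated for illustration only.

```python
import numpy as np
from alibi_detect.cd import KSDrift  # Kolmogorov-Smirnov drift detector

# Hypothetical reference data representing the distribution the model was trained on
x_ref = np.random.normal(loc=0.0, scale=1.0, size=(1000, 5))

# Initialise the detector with the reference set and a significance level
cd = KSDrift(x_ref, p_val=0.05)

# Hypothetical new production batch whose distribution has shifted
x_new = np.random.normal(loc=0.5, scale=1.0, size=(200, 5))

preds = cd.predict(x_new)
print(preds['data']['is_drift'])  # 1 if drift is detected, 0 otherwise
```

The returned dictionary also includes per-feature p-values and test statistics, which can help diagnose which inputs are drifting.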
For the drift detection methods, both TensorFlow and PyTorch backends are supported.
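As a hedged sketch of the backend selection, detectors such as `MMDDrift` typically accept a `backend` argument to choose between the two frameworks; `x_ref` below refers to the same hypothetical reference data as in the previous example.

```python
from alibi_detect.cd import MMDDrift

# The same maximum mean discrepancy detector can run on either framework
cd_tf = MMDDrift(x_ref, backend='tensorflow', p_val=0.05)
cd_pt = MMDDrift(x_ref, backend='pytorch', p_val=0.05)
```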