Alibi Detect


Alibi Detect is an open source Python library (a sister library to Alibi) focused on detecting outliers, adversarial examples, and concept drift.

Detecting adversarial examples is relevant for assessing the security of machine learning models. Concept drift matters because machine learning models learn complex statistical patterns in their training data; if these patterns "drift" in unforeseen ways after a model is deployed, model performance will degrade over time. In systems where model predictions affect people, such drift can also threaten the fairness of the predictions.
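
As an illustration, here is a minimal sketch of drift detection with Alibi Detect's Kolmogorov-Smirnov detector; the reference and shifted datasets are synthetic, made up for the example:

```python
import numpy as np
from alibi_detect.cd import KSDrift

# Reference data the model was trained on (synthetic, for illustration).
x_ref = np.random.normal(loc=0.0, scale=1.0, size=(1000, 10)).astype(np.float32)

# The detector runs a Kolmogorov-Smirnov test per feature against the reference set.
cd = KSDrift(x_ref, p_val=0.05)

# New production data whose distribution has shifted.
x_new = np.random.normal(loc=0.5, scale=1.0, size=(1000, 10)).astype(np.float32)

preds = cd.predict(x_new)
print(preds['data']['is_drift'])  # 1 if drift was detected, else 0
print(preds['data']['p_val'])     # per-feature p-values
```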

For each of these detection problems, Alibi Detect supports a broad array of methods. The README contains tables summarizing the characteristics of the supported methods for outlier detection, adversarial detection, and drift detection, along with paper references for every supported method.
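
The detectors share a common fit/predict pattern. A sketch with the isolation-forest outlier detector, again on synthetic data (the threshold percentile here is an arbitrary choice for the example):

```python
import numpy as np
from alibi_detect.od import IForest

# Mostly "normal" training data, plus a test set with a few injected outliers.
X_train = np.random.normal(size=(1000, 4)).astype(np.float32)
X_test = np.concatenate([
    np.random.normal(size=(95, 4)),
    np.random.uniform(low=5, high=10, size=(5, 4)),  # obvious outliers
]).astype(np.float32)

od = IForest(threshold=None)                     # threshold inferred below
od.fit(X_train)
od.infer_threshold(X_train, threshold_perc=95)   # flag the top 5% of scores

preds = od.predict(X_test)
print(preds['data']['is_outlier'])               # 0/1 flag per instance
```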

The drift detection methods support both TensorFlow and PyTorch backends.
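
For instance, the MMD drift detector takes a backend argument that selects which framework performs the computation; a sketch, again on synthetic data:

```python
import numpy as np
from alibi_detect.cd import MMDDrift

x_ref = np.random.normal(size=(500, 10)).astype(np.float32)

# The same detector, backed by either TensorFlow or PyTorch.
cd_tf = MMDDrift(x_ref, backend='tensorflow', p_val=0.05)
cd_pt = MMDDrift(x_ref, backend='pytorch', p_val=0.05)

x_new = np.random.normal(loc=0.3, size=(500, 10)).astype(np.float32)
print(cd_tf.predict(x_new)['data']['is_drift'])
```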