Security

Security focuses on tools for mitigating AI-specific risks, for example by promoting robustness against adversarial attacks.

For example, adversarial examples can be created by finding perturbations that maximize the prediction error and applying them to input images in such a way that 1) a human barely sees the difference, if at all, but 2) the neural network misclassifies the input. If the network's classifications have real-world impact, this type of attack needs to be countered for safe use of the model.
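
A minimal sketch of this idea is the Fast Gradient Sign Method (FGSM): perturb the input one step in the direction of the sign of the loss gradient with respect to that input. The PyTorch model and batch below are placeholders, and the epsilon value is illustrative.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Create adversarial examples by taking one step of size epsilon
    in the direction of the sign of the loss gradient w.r.t. the input."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Perturb the input to *increase* the loss, then clip to the valid pixel range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Usage (assuming `model` is a trained classifier and `images`, `labels` a batch):
# adv_images = fgsm_perturb(model, images, labels, epsilon=0.03)
# print((model(adv_images).argmax(dim=1) != labels).float().mean())  # misclassification rate
```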

Security is important for the safety of AI, but safety may be construed as a broader category that also involves explainability or fairness in the sense of “protection against harm”.

AdvBox

AdvBox offers a number of AI model security toolkits. AdversarialBox allows zero-coding generation of adversarial examples for a wide range of neural network frameworks. An overview of the supported attacks and defenses can be found here and the corresponding code here. It takes some effort to locate all attacks mentioned on the homepage in the code base. Generally speaking, the documentation of AdvBox is incomplete and not very user-friendly. ODD (Object Detector Deception) showcases a specific attack on object detection networks such as YOLO, but is not mentioned in the README. Read more...

Alibi Detect

Alibi Detect is an open-source Python library (a sister library to Alibi) focused on detecting outliers, adversarial examples, and concept drift. Detecting adversarial examples is relevant for assessing the security of machine learning models. Machine learning models learn complex statistical patterns in datasets. If these statistical patterns "drift" in unforeseen ways after a model is deployed, model performance will degrade over time. In systems where model predictions affect people, this can also threaten the fairness of the predictions. Read more...
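
A rough sketch of the drift-detection side of the library, using the Kolmogorov-Smirnov detector from alibi_detect.cd; the data shapes and the injected shift below are made up for illustration.

```python
import numpy as np
from alibi_detect.cd import KSDrift  # feature-wise Kolmogorov-Smirnov drift detector

# Reference data the model was trained/validated on (hypothetical tabular data).
x_ref = np.random.randn(1000, 20).astype(np.float32)

cd = KSDrift(x_ref, p_val=0.05)

# New production batch; a shift in the mean should be flagged as drift.
x_new = (np.random.randn(200, 20) + 0.5).astype(np.float32)
preds = cd.predict(x_new)
print(preds["data"]["is_drift"])  # 1 if drift is detected, 0 otherwise
```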

ART: Adversarial Robustness 360 Toolbox

The Adversarial Robustness Toolbox (ART) is the first comprehensive toolbox that unifies many defensive techniques for four categories of adversarial attacks on machine learning models. These categories are model evasion, model poisoning, model extraction and inference (e.g. inferring sensitive attributes of the training data, or determining whether an example was part of the training data). ART supports all popular machine learning frameworks, all data types and all machine learning tasks. Read more...
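
A short sketch of an evasion attack with ART, assuming a PyTorch classifier. The wrapper and attack classes follow ART's documented names, but the tiny placeholder model, shapes and epsilon are illustrative only.

```python
import torch
import torch.nn as nn

from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod

# A small placeholder network for 28x28 grayscale images with 10 classes.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))

# Wrap the model so ART's framework-agnostic attacks can use it.
classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    optimizer=torch.optim.Adam(model.parameters(), lr=1e-3),
    input_shape=(1, 28, 28),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

# Craft adversarial examples with the Fast Gradient Method (an evasion attack).
attack = FastGradientMethod(estimator=classifier, eps=0.1)
x = torch.rand(8, 1, 28, 28).numpy()  # placeholder batch as numpy array
x_adv = attack.generate(x=x)
print(classifier.predict(x_adv).argmax(axis=1))  # predictions on the adversarial batch
```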

CleverHans

CleverHans is a Python library whose main purpose is to provide good reference implementations of attacks for benchmarking machine learning models against adversarial examples. The main maintainers of this library are Ian Goodfellow and Nicolas Papernot. Attacks (i.e. methods for generating adversarial examples) are listed under /cleverhans, and each supported framework has its own folder with attack implementations. CleverHans also aims to implement a set of defenses, but this is still work in progress (at the moment there is only a defense implementation for PyTorch). Read more...
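
A minimal sketch using the PyTorch attack implementations; the module path follows the per-framework layout described above, and the placeholder model and batch are assumptions for illustration.

```python
import numpy as np
import torch
import torch.nn as nn

# CleverHans ships attack implementations per framework; this imports the PyTorch FGSM.
from cleverhans.torch.attacks.fast_gradient_method import fast_gradient_method

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # placeholder classifier
x = torch.rand(8, 1, 28, 28)                                 # placeholder batch

# Generate adversarial examples with an L-infinity perturbation bound of 0.1.
x_adv = fast_gradient_method(model, x, eps=0.1, norm=np.inf)
print(model(x_adv).argmax(dim=1))  # predictions on the perturbed inputs
```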

Foolbox

Foolbox is a comprehensive adversarial library for attacking machine learning models, with a focus on neural networks in computer vision. At the time of writing, Foolbox contains 41 gradient-based and decision-based adversarial attacks, making it the second-largest adversarial library after ART. A notable difference from ART is that Foolbox only contains attacks, and no defenses or evaluation metrics. The library is very user-friendly, with a clear API and documentation. Read more...
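
A small sketch of that API, assuming a trained PyTorch classifier with inputs in [0, 1]; the placeholder model, random batch and epsilon value are illustrative only.

```python
import torch
import torch.nn as nn
import foolbox as fb

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10)).eval()  # placeholder classifier
fmodel = fb.PyTorchModel(model, bounds=(0, 1))  # wrap the model for Foolbox

images = torch.rand(8, 1, 28, 28)
labels = torch.randint(0, 10, (8,))

# Run a gradient-based L-infinity PGD attack at a single perturbation budget.
attack = fb.attacks.LinfPGD()
raw, clipped, is_adv = attack(fmodel, images, labels, epsilons=0.03)
print(is_adv)  # boolean tensor: which inputs were successfully attacked
```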

OpenMined (PySyft)

The OpenMined community is a collaboration of several organizations, including the teams behind TensorFlow, PyTorch and Keras, to create an open-source ecosystem of privacy tools that extend libraries such as PyTorch with cryptographic techniques and differential privacy. The aim is to contribute to the adoption of privacy-preserving AI. To this end, OpenMined offers several privacy-preserving tools on their GitHub. A main tool is PySyft, which allows "computing on data you do not own and cannot see". Read more...
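
A minimal sketch of that idea, based on the older PySyft 0.2-style API; newer releases have restructured the library considerably, so treat the exact calls as an assumption rather than the current interface.

```python
import torch
import syft as sy

# NOTE: PySyft 0.2-style API; later versions replaced hooks/VirtualWorker with new abstractions.
hook = sy.TorchHook(torch)              # extend torch tensors with .send()/.get()
bob = sy.VirtualWorker(hook, id="bob")  # a simulated remote data owner

x = torch.tensor([1.0, 2.0, 3.0]).send(bob)  # the data now "lives" on bob's worker
y = torch.tensor([1.0, 1.0, 1.0]).send(bob)

z = x + y       # computed remotely on bob; locally we only hold a pointer
print(z.get())  # explicitly retrieve the result -> tensor([2., 3., 4.])
```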