Foolbox is a comprehensive adversarial library for attacking machine learning models, with a focus on neural networks in computer vision.
At the time of writing, Foolbox contains 41 gradient-based and decision-based adversarial attacks, making it the second-largest adversarial library after ART. A notable difference from ART is that Foolbox only contains attacks, and no defenses or evaluation metrics.
The library is very user-friendly, with a clear API and documentation.
Foolbox has dedicated classes to wrap around PyTorch, TensorFlow, and JAX models, e.g. fb.PyTorchModel(model, bounds=bounds, preprocessing=preprocessing), where model is a native PyTorch module. The wrapped Foolbox model can then be passed into an attack of choice.
This clear API makes it possible to easily experiment with many adversarial attacks.
Foolbox is built on EagerPy, which allows the same Python code to run natively in each of the supported frameworks, e.g. in PyTorch.
I take this to mean that Foolbox will not support libraries unless they are supported by
EagerPy as well.
Foolbox also comes with extensive type annotations, keeping the library in line with modern Python 3 conventions.