AdvBox
- Values: { security }
- Categories: { model-specific }
- Stage: { in-processing post-processing }
- Repository: https://github.com/advboxes/AdvBox/blob/master/adversarialbox.md
- Tasks: { classification }
- Input data: { image }
- Licence: Apache License 2.0
- Languages: { Python }
- Frameworks: { PaddlePaddle PyTorch Caffe2 MxNet Keras TensorFlow }
- References:
AdvBox offers a number of AI model security toolkits. AdversarialBox allows zero-coding generation of adversarial examples for a wide range of neural network frameworks.
An overview of the supported attacks and defenses, together with the corresponding code, is linked from the repository. It takes some effort, however, to find all attacks mentioned on the homepage in the code base, and, generally speaking, the documentation of AdvBox is incomplete and not very user-friendly.
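To illustrate the kind of adversarial-example generation that AdversarialBox automates, the sketch below shows the fast gradient sign method (FGSM) in plain PyTorch. This is not AdvBox's own API; the `model` argument and the pixel range are assumptions chosen for the example.

```python
# Minimal FGSM sketch in plain PyTorch, only to illustrate what adversarial
# example generation looks like; this is NOT AdvBox's own API.
import torch
import torch.nn.functional as F

def fgsm(model, x, label, epsilon=0.03):
    """Return an adversarial version of x using the fast gradient sign method."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Step in the direction that increases the loss, then clip to a valid
    # pixel range (inputs are assumed to be normalized to [0, 1]).
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```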
ODD (Object Detector Deception) showcases a specific attack on object detection networks such as YOLO, but is not mentioned in the README.
AdvDetect is named in the README, but it is not clear whether and how it differs from ODD. The “homepage” for AdvDetect is an empty markdown file at the time of writing.
AdvPoison contains an example of a data poisoning attack for MNIST, implemented in PaddlePaddle and PyTorch.
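The sketch below shows one simple form of data poisoning, label flipping on MNIST, written in plain PyTorch/torchvision. It does not reproduce AdvPoison's actual implementation; the function name, the flipped classes, and the poisoning fraction are illustrative assumptions.

```python
# Hedged sketch of a label-flipping poisoning attack on MNIST; not AdvPoison's
# own code, only the general idea of corrupting training data before training.
import torch
from torchvision import datasets, transforms

def poison_labels(dataset, fraction=0.1, source=1, target=7, seed=0):
    """Relabel a fraction of the 'source' digits as 'target' before training."""
    g = torch.Generator().manual_seed(seed)
    idx = (dataset.targets == source).nonzero(as_tuple=True)[0]
    n_poison = int(fraction * len(idx))
    chosen = idx[torch.randperm(len(idx), generator=g)[:n_poison]]
    dataset.targets[chosen] = target
    return dataset

train_set = datasets.MNIST("data", train=True, download=True,
                           transform=transforms.ToTensor())
train_set = poison_labels(train_set)  # train on this to observe the degradation
```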
The repository does contain a wide range of tutorials.
This library is said to be based on Foolbox, which is more comprehensive and has proper documentation.
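For comparison, the sketch below follows the usage pattern from the Foolbox 3 documentation, with a pretrained torchvision ResNet standing in for the network under attack; the attack choice and epsilon value are assumptions for the example.

```python
# Minimal Foolbox 3 sketch: FGSM against a pretrained PyTorch model.
import foolbox as fb
import torchvision

model = torchvision.models.resnet18(pretrained=True).eval()
preprocessing = dict(mean=[0.485, 0.456, 0.406],
                     std=[0.229, 0.224, 0.225], axis=-3)
fmodel = fb.PyTorchModel(model, bounds=(0, 1), preprocessing=preprocessing)

# A small batch of correctly formatted sample images and labels.
images, labels = fb.utils.samples(fmodel, dataset="imagenet", batchsize=4)

attack = fb.attacks.LinfFastGradientAttack()
# Returns raw and clipped adversarial examples plus a per-sample success mask.
raw, clipped, is_adv = attack(fmodel, images, labels, epsilons=0.03)
```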