ART 1.0.0


This is the first major release of the Adversarial Robustness 360 Toolbox (ART v1.0)!

This release generalises ART to support all classifier models, in addition to its existing support for neural networks. It also generalises the label format to accept index labels as well as one-hot-encoded labels, and the input shape to accept, for example, tabular data as input features. This release further adds new model-specific white-box and poisoning attacks, and provides new methods to certify and verify the adversarial robustness of neural networks and decision tree ensembles.

Added

  • Add support for all classifiers and pipelines of scikit-learn, including but not limited to LogisticRegression, SVC, LinearSVC, DecisionTreeClassifier, AdaBoostClassifier, BaggingClassifier, ExtraTreesClassifier, GradientBoostingClassifier, RandomForestClassifier, and Pipeline (sketch below). (#47)

  • Add support for gradient-boosted tree classifier models from XGBoost, LightGBM and CatBoost (sketch below).

  • Add support for TensorFlow v2 (rc0) by introducing a new classifier, TensorFlowV2Classifier, which supports eager execution and accepts callable models (sketch below). KerasClassifier has been extended to support TensorFlow v2 tensorflow.keras models without eager execution. (#66)

  • Add support for models of the Gaussian Process framework GPy. (#116)

  • Add the High-Confidence-Low-Uncertainty (HCLU) adversarial example formulation as an attack on Gaussian Processes (sketch below). (#116)

  • Add the Decision Tree attack as a white-box attack for decision tree classifiers (sketch below). (#115)

  • Add support for white-box attacks on scikit-learn's LogisticRegression, SVC, LinearSVC, and DecisionTreeClassifier, as well as on GPy models, and for black-box attacks on all scikit-learn classifiers and on XGBoost, LightGBM and CatBoost models.

  • Add Randomized Smoothing as a wrapper class for neural network classifiers to provide certified adversarial robustness under the L2 norm (sketch below). (#114)

  • Add the Clique Method Robustness Verification method for decision-tree-ensemble classifiers and extend it to models of XGBoost, LightGBM, and scikit-learn's ExtraTreesClassifier, GradientBoostingClassifier, and RandomForestClassifier (sketch below). (#124)

  • Add BlackBoxClassifier, which requires only a single Python function as the interface to a model's predictions, making it the most general and versatile classifier of ART (sketch below). New tutorial notebooks demonstrate BlackBoxClassifier by testing the adversarial robustness of remote, deployed classifier models and of the Optical Character Recognition (OCR) engine Tesseract. (#123, #152)

  • Add the Poisoning Attack for Support Vector Machines with linear, polynomial, or radial basis function kernels (sketch below). (#155)
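
Usage sketches for several of the additions above follow. The class and attack names match this release, while exact import paths and constructor arguments are assumptions that may need checking against the v1.0 documentation.

Wrapping a scikit-learn model for ART and running a black-box attack against it (a minimal sketch; the ZooAttack parameter names are assumed):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from art.classifiers import SklearnClassifier
from art.attacks import ZooAttack

# Train any scikit-learn classifier as usual.
x_train = np.random.rand(100, 4).astype(np.float32)
y_train = np.random.randint(0, 3, size=100)
model = RandomForestClassifier(n_estimators=50).fit(x_train, y_train)

# Wrap it for ART; black-box attacks such as ZOO only need predictions.
classifier = SklearnClassifier(model=model)
attack = ZooAttack(classifier=classifier, nb_parallel=2, use_resize=False, use_importance=False)
x_adv = attack.generate(x=x_train[:5])
```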
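Wrapping a gradient-boosted tree model (a sketch; the wrapper name XGBoostClassifier is from this release, while the nb_features and nb_classes arguments are assumptions):

```python
import numpy as np
import xgboost as xgb
from art.classifiers import XGBoostClassifier

x_train = np.random.rand(100, 4).astype(np.float32)
y_train = np.random.randint(0, 3, size=100)
model = xgb.XGBClassifier(n_estimators=20).fit(x_train, y_train)

# nb_features/nb_classes tell ART the model's input and output dimensions,
# since tree boosters do not expose them uniformly.
classifier = XGBoostClassifier(model=model, nb_features=4, nb_classes=3)
predictions = classifier.predict(x_train[:5])
```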
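Creating a TensorFlow v2 classifier from a callable model (a sketch; the argument names of TensorFlowV2Classifier are assumptions based on the v1.0 API):

```python
import tensorflow as tf
from art.classifiers import TensorFlowV2Classifier

# Any callable model works, including tf.keras models running eagerly.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28, 1)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10),  # raw logits; no softmax required (see Changed below)
])

classifier = TensorFlowV2Classifier(
    model=model,
    nb_classes=10,
    input_shape=(28, 28, 1),
    loss_object=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    clip_values=(0.0, 1.0),
)
```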
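Attacking a GPy Gaussian Process classifier with HCLU (a sketch; GPyGaussianProcessClassifier matches this release, and the HighConfidenceLowUncertainty parameters are partly assumed):

```python
import numpy as np
import GPy
from art.classifiers import GPyGaussianProcessClassifier
from art.attacks import HighConfidenceLowUncertainty

# Fit a binary GPy classification model.
x_train = np.random.rand(40, 2)
y_train = np.random.randint(0, 2, size=(40, 1))
gp = GPy.models.GPClassification(x_train, y_train, kernel=GPy.kern.RBF(input_dim=2))

classifier = GPyGaussianProcessClassifier(gp)
# HCLU searches for adversarial examples that the GP classifies with
# high confidence but low predictive uncertainty.
attack = HighConfidenceLowUncertainty(classifier, conf=0.95, min_val=0.0, max_val=1.0)
x_adv = attack.generate(x=x_train[:2])
```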
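The white-box Decision Tree attack (a sketch; it assumes SklearnClassifier dispatches to the decision-tree-specific wrapper that DecisionTreeAttack accepts):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
from art.classifiers import SklearnClassifier
from art.attacks import DecisionTreeAttack

x, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier().fit(x, y)

# The attack walks the tree structure directly, so no gradients are needed.
classifier = SklearnClassifier(model=tree)
attack = DecisionTreeAttack(classifier)
x_adv = attack.generate(x=x[:5].astype(np.float32))
```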
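Randomized Smoothing as a wrapper (a sketch; the import path art.wrappers.RandomizedSmoothing and the sample_size/scale/alpha parameters are assumptions based on the v1.0 API):

```python
import numpy as np
import tensorflow as tf
from art.classifiers import KerasClassifier
from art.wrappers import RandomizedSmoothing

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28, 1)),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(loss="categorical_crossentropy", optimizer="adam")
classifier = KerasClassifier(model=model, clip_values=(0.0, 1.0))

smoothed = RandomizedSmoothing(
    classifier=classifier,
    sample_size=100,   # Gaussian noise samples drawn per input
    scale=0.25,        # standard deviation of the Gaussian noise
    alpha=0.001,       # failure probability of the L2 certificate
)
x_test = np.random.rand(2, 28, 28, 1).astype(np.float32)
predictions = smoothed.predict(x_test)
```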
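Robustness verification for tree ensembles (a sketch; RobustnessVerificationTreeModelsCliqueMethod matches this release, while the verify() argument names and the expected one-hot label format are assumptions):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from art.classifiers import SklearnClassifier
from art.metrics import RobustnessVerificationTreeModelsCliqueMethod
from art.utils import to_categorical

x_train = np.random.rand(100, 4).astype(np.float32)
y_train = np.random.randint(0, 2, size=100)
model = GradientBoostingClassifier(n_estimators=10).fit(x_train, y_train)

rv = RobustnessVerificationTreeModelsCliqueMethod(classifier=SklearnClassifier(model=model))
# Returns an average lower bound on the robustness radius and a verified error rate.
average_bound, verified_error = rv.verify(
    x=x_train[:10], y=to_categorical(y_train[:10], nb_classes=2),
    eps_init=0.3, nb_search_steps=10, max_clique=2, max_level=2,
)
```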
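BlackBoxClassifier needs only a prediction function (a sketch; the constructor argument names predict, input_shape and nb_classes are assumptions based on the v1.0 API):

```python
import numpy as np
from art.classifiers import BlackBoxClassifier

def predict(x):
    """Stand-in for any opaque model, e.g. an HTTP call to a deployed endpoint.
    Must return one prediction row (probabilities or one-hot) per input."""
    scores = x.reshape(len(x), -1).sum(axis=1)
    labels = (scores > scores.mean()).astype(int)
    return np.eye(2)[labels].astype(np.float32)

classifier = BlackBoxClassifier(predict, input_shape=(4,), nb_classes=2, clip_values=(0.0, 1.0))
print(classifier.predict(np.random.rand(3, 4).astype(np.float32)))
```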
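The SVM poisoning attack (a sketch with strong assumptions: the step/eps arguments, the train/validation splits, and the generate() entry point mirror the v1.0 API but should be checked against the docs):

```python
import numpy as np
from sklearn.svm import SVC
from art.classifiers import SklearnClassifier
from art.attacks import PoisoningAttackSVM

# Binary toy data with one-hot labels; the attack optimises poison points
# that degrade the SVM's accuracy on the validation split.
x = np.random.rand(60, 2).astype(np.float32)
y = np.eye(2)[np.random.randint(0, 2, size=60)]
x_train, y_train, x_val, y_val = x[:40], y[:40], x[40:], y[40:]

svc = SVC(kernel="rbf").fit(x_train, np.argmax(y_train, axis=1))
classifier = SklearnClassifier(model=svc, clip_values=(0.0, 1.0))

attack = PoisoningAttackSVM(
    classifier=classifier, step=0.1, eps=1.0,
    x_train=x_train, y_train=y_train, x_val=x_val, y_val=y_val,
)
x_poison = attack.generate(x=x_train[:1], y=y_train[:1])
```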

Changed

  • Introduce a new, flexible API for all classifiers: an abstract base class for basic classifiers (minimal functionality to support black-box attacks), plus mixins for neural networks, gradient-providing classifiers (to support white-box attacks), and decision-tree-based classifiers (sketch after this list).

  • Update and extend the get-started examples and notebook tutorials for all supported frameworks, and introduce new ones. (#47, #140)

  • Extend the label format to accept index labels in addition to the already supported one-hot-encoded labels; internally, ART continues to treat labels as one-hot-encoded. This lets users keep the label format preferred by their machine learning framework and datasets (sketch after this list). (#126)

  • Change the order of the preprocessing steps that apply defences and standardisation/normalisation in classifiers. Previously, classifiers applied standardisation first, followed by defences; with this release, defences are applied first, followed by standardisation, which makes defence parameters comparable across classifiers with different standardisation/normalisation parameters. (#84)

  • Use an attack's batch_size as the argument to its classifier's predict method to reduce out-of-memory errors for large models. (#105)

  • Generalise the classifiers of TensorFlow, Keras, PyTorch, and MXNet by removing assumptions about their output (logits or probabilities). The Boolean parameter logits has been removed from the Classifier API in the methods predict and class_gradient; predictions and gradients are now computed at the output of the model without any modification (migration sketch after this list). (#50, #75, #106, #150)

  • Rename TFClassifier to TensorFlowClassifier and keep TFClassifier for backward compatibility.
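
A sketch of how the new classifier API composes (the module path art.classifiers.classifier and the mixin names are assumptions based on the v1.0 code base):

```python
import tensorflow as tf
from art.classifiers import KerasClassifier
from art.classifiers.classifier import (
    Classifier, ClassifierGradients, ClassifierNeuralNetwork,
)

model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(4,), activation="softmax")])
model.compile(loss="categorical_crossentropy", optimizer="adam")

classifier = KerasClassifier(model=model)
assert isinstance(classifier, Classifier)                # minimal, black-box-capable API
assert isinstance(classifier, ClassifierNeuralNetwork)   # neural-network mixin
assert isinstance(classifier, ClassifierGradients)       # gradients: enables white-box attacks
```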
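Index and one-hot labels are now interchangeable (a small sketch using art.utils.to_categorical, which mirrors ART's internal conversion):

```python
import numpy as np
from art.utils import to_categorical

y_index = np.array([2, 0, 1])                      # index labels, as most datasets provide
y_one_hot = to_categorical(y_index, nb_classes=3)  # one-hot, ART's internal representation

# Either form can now be passed wherever labels are expected, e.g.
# classifier.fit(x, y_index) or attack.generate(x, y=y_one_hot).
```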
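The migration for the removed logits flag looks roughly like this (illustrative; `classifier` and `x` stand for any ART classifier and input batch):

```python
# v0.x: callers chose between probabilities and logits explicitly.
# predictions = classifier.predict(x, logits=True)
# gradients = classifier.class_gradient(x, logits=False)

# v1.0: predictions and gradients are computed at the model's own output,
# whatever it produces (logits or probabilities), with no modification.
predictions = classifier.predict(x)
gradients = classifier.class_gradient(x)
```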

Removed

  • Sunset support for Python 2 in preparation for its retirement on Jan 1, 2020. We have stopped running unit tests with Python 2 and no longer require new contributions to run with Python 2. We keep existing compatibility code for Python 2 and 3 where possible. (#83)

Fixed

  • Improve VirtualAdversarialMethod by making the computation of the L2 data normalisation more reliable and by raising an exception if the attack is used with a model that outputs logits; VirtualAdversarialMethod currently expects probabilities as output. (#120, #157)
