Added
- Added a new module `art.evaluations` for evaluation tools that go beyond creating adversarial examples and provide insights into the robustness of machine learning models beyond adversarial accuracy, building on `art.estimators` and `art.attacks` as much as possible. The first implemented evaluation tool is `art.evaluations.SecurityCurve`, which calculates the security curve, a popular tool for evaluating robustness against evasion, using `art.attacks.evasion.ProjectedGradientDescent`, and provides an evaluation of potential gradient masking in the evaluated model. (#654)
- Added support for perturbation masks in `art.attacks.evasion.AutoProjectedGradientDescent`, similar to `art.attacks.evasion.ProjectedGradientDescent`, and added Boolean masks for patch location sampling in `DPatch` and all `AdversarialPatch` attacks to enable pixel masks that define the regions from which patch locations are sampled during patch training or where trained patches can be applied.
- Added preprocessing for infinite impulse response (IIR) and finite impulse response (FIR) filtering for Room Acoustics Modelling in framework-agnostic (`art.preprocessing.audio.LFilter`) and PyTorch-specific (`art.preprocessing.audio.LFilterPyTorch`) implementations as the first tool for physical environment simulation for audio data in `art.preprocessing.audio`. Additional tools will be added in future releases. (#744)
- Added Expectation over Transformation (EoT) to `art.preprocessing.expectation_over_transformation` with a first, TensorFlow v2-specific implementation of sampled image rotation for classification tasks (`art.preprocessing.expectation_over_transformation.EOTImageRotationTensorFlowV2`) providing full support for gradient backpropagation through the EoT. Additional EoTs will be added in future releases. (#744)
- Added support for multi-modal inputs in `ProjectedGradientDescent` attacks and the `FastGradientMethod` attack with broadcastable arguments `eps` and `eps_step` as `np.ndarray` to enable attacks against, for example, images with multi-modal color channels. (#691)
- Added a Database Reconstruction attack in the new module `art.attacks.inference.reconstruction.DatabaseReconstruction`, enabling evaluation of the privacy of machine learning models by reconstructing one removed sample of the training dataset. The attack is demonstrated in a new notebook on models trained non-privately and with differential privacy using the Differential Privacy Library (DiffPrivLib) as a defense. (#759)
- Added support for one-hot encoded feature definitions in black-box attribute inference attacks. (#768)
- Added a new model-specific speech recognition estimator for Lingvo ASR in `art.estimators.speech_recognition.TensorFlowLingvoASR`. (#584)
- Added a framework-independent implementation of the Imperceptible ASR attack with loss support for TensorFlow and PyTorch in `art.attacks.evasion.ImperceptibleASR`. (#719, #760)
- Added the Clean Label Backdoor poisoning attack in `art.attacks.poisoning.PoisoningAttackCleanLabelBackdoor`. (#725)
- Added the Strong Intentional Perturbation (STRIP) defense against poisoning attacks in `art.defences.transformer.poisoning.STRIP`. (#656)
- Added the Label-only Boundary Distance Attack `art.attacks.inference.membership_inference.LabelOnlyDecisionBoundary` and the Label-only Gap Attack `art.attacks.inference.membership_inference.LabelOnlyGapAttack` for membership inference attacks on classification estimators. (#720)
- Added support for preprocessing and preprocessing defences in the PyTorch-specific implementation of the Imperceptible ASR attack in `art.attacks.evasion.ImperceptibleASRPyTorch`. (#763)
- Added a robust version of the evasion attack DPatch against object detectors in `art.attacks.evasion.RobustDPatch`, adding improvements such as expectation over transformation steps, a fixed patch location, etc. (#751)
- Added optional support for Automatic Mixed Precision (AMP) in `art.estimators.classification.PyTorchClassifier` to facilitate mixed-precision computations and increase performance. (#619)
- Added the Brendel & Bethge evasion attack in `art.attacks.evasion.BrendelBethgeAttack`, based on the original reference implementation. (#626)
- Added framework-agnostic support for Randomized Smoothing estimators in addition to the framework-specific implementations for TensorFlow v2 and PyTorch. (#738)
- Added an optional progress bar to `art.utils.get_file` to facilitate downloading large files. (#698)
- Added support for perturbation masks in the HopSkipJump evasion attack in `art.attacks.evasion.HopSkipJump`. (#653)
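As an illustration of the `SecurityCurve` entry above: a minimal numpy sketch of what a security curve computes, namely adversarial accuracy as a function of the attack budget `eps`. The helper, the toy model, and the toy attack below are illustrative assumptions, not ART's API:

```python
import numpy as np

def security_curve(predict, attack, x, y, eps_values):
    """Adversarial accuracy for each perturbation budget eps."""
    accuracies = []
    for eps in eps_values:
        x_adv = attack(x, y, eps)
        accuracies.append(float(np.mean(predict(x_adv) == y)))
    return accuracies

# Toy 1-D model: class 1 if the feature is positive; the "attack"
# pushes every sample straight towards the decision boundary at 0.
predict = lambda x: (x[:, 0] > 0).astype(int)
attack = lambda x, y, eps: x - eps * np.sign(x)

x = np.array([[0.5], [1.5], [-0.5], [-1.5]])
y = predict(x)
curve = security_curve(predict, attack, x, y, [0.0, 1.0, 2.0])
```

A smoothly decreasing curve like this one is the expected shape; a curve that stays flat at high accuracy for large `eps` is one symptom of gradient masking.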
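For the `LFilter` entry above: a self-contained numpy sketch of the direct-form difference equation that IIR/FIR filtering implements (the same coefficient convention as `scipy.signal.lfilter`). This is an illustration of the filtering operation, not ART's implementation:

```python
import numpy as np

def lfilter(b, a, x):
    """Direct-form difference equation:
    a[0] * y[n] = sum_k b[k] * x[n-k] - sum_{k>=1} a[k] * y[n-k]."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        acc = sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
        acc -= sum(a[k] * y[n - k] for k in range(1, len(a)) if n - k >= 0)
        y[n] = acc / a[0]
    return y

fir = lfilter([0.5, 0.5], [1.0], [1.0, 1.0, 1.0, 1.0])  # moving average (FIR)
iir = lfilter([1.0], [1.0, -0.5], [1.0, 0.0, 0.0])      # one-pole feedback (IIR)
```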
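For the EoT entry above: a deliberately simplified numpy sketch of the expectation-over-transformation idea, averaging a gradient estimate over a set of sampled input transformations. It ignores mapping gradients back through the inverse transformation, and all names are illustrative, not ART's API:

```python
import numpy as np

def eot_gradient(grad_fn, x, transforms):
    """Expectation over Transformation: average a gradient estimate
    over a set of sampled input transformations."""
    return np.mean([grad_fn(t(x)) for t in transforms], axis=0)

# Sampled transformations: rotations by 0/90/180/270 degrees.
transforms = [lambda x, k=k: np.rot90(x, k) for k in range(4)]
x = np.array([[1.0, 2.0], [3.0, 4.0]])
g = eot_gradient(lambda z: z, x, transforms)  # identity "gradient" for illustration
```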
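For the multi-modal `eps` entry above: a numpy sketch of how a broadcastable per-channel budget combines with a gradient sign step. The array shapes are illustrative assumptions (an NHWC batch), not a requirement of the attacks:

```python
import numpy as np

# Per-channel attack budgets: eps_step as an np.ndarray broadcast
# against an NHWC batch, so each color channel gets its own step size.
x = np.zeros((2, 4, 4, 3))            # batch of two 4x4 RGB images
grad_sign = np.ones_like(x)           # sign of the loss gradient
eps_step = np.array([0.1, 0.2, 0.3])  # one step size per channel

x_adv = x + eps_step * grad_sign      # broadcasts over the channel axis
```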
Changed
- Changed preprocessing defenses and input standardisation with mean and standard deviation by combining all preprocessing into a single preprocessing API defined in the new module `art.preprocessing`. Existing preprocessing defenses remain in `art.defences.preprocessor`, but are treated as equal and run with the same API and code as the general preprocessing tools in `art.preprocessing`. Standardisation is now a preprocessing tool with framework-specific implementations for PyTorch and TensorFlow v2 in the forward and backward directions. Estimators in `art.estimators.classification` and `art.estimators.object_detection` for TensorFlow v2 and PyTorch that are set up with exclusively framework-specific preprocessing steps prepend the preprocessing directly to the model, so that outputs are evaluated and gradients backpropagated in a single step through the model and the (chained) preprocessing, instead of the previous two separate steps, for improved performance. Framework-independent preprocessing tools continue to be evaluated in a step separate from the model. This change also enables full support for any model-specific standardisation/normalisation functions for the model inputs and their gradients. (#629)
- Changed the `Preprocessor` and `Postprocessor` APIs to simplify them by defining reused methods and the most common property values as defaults in the API. The default for `art.defences.preprocessor.preprocessor.Preprocessor.estimate_gradient` in framework-agnostic preprocessing is Backward Pass Differentiable Approximation (BPDA) with the identity function, which can be customized with accurate or better approximations by implementing `estimate_gradient`. (#752)
- Changed random restarts in all `ProjectedGradientDescent` implementations to collect the successful adversarial examples of each random restart, instead of previously keeping only the adversarial examples of the most successful random restart; adversarial examples from earlier random restarts are overwritten by adversarial examples from later random restarts. This leads to equal or better adversarial accuracies compared to previous releases, and changes the order of processing the input samples: all random restarts of a batch are now completed before the next batch is processed, instead of looping over all batches within each random restart. (#765)
- Changed the order of mask application and normalization of the perturbation in all `ProjectedGradientDescent` and `FastGradientMethod` attacks to first apply the mask to the `loss_gradients` and subsequently normalize only the remaining, unmasked perturbation. That way the resulting perturbation can be compared directly to the attack budget `eps`. (#711)
- Changed the location of the implementations and the default values of the properties `channels_first`, `clip_values`, and `input_shape` in `art.estimators` to facilitate the creation of custom estimators not present in `art.estimators`.
- Changed the Spectral Signature Defense by removing the argument `num_classes` and replacing it with the estimator's `nb_classes` property, and by renaming the parameter `ub_pct_poison` to `expected_pp_poison`. (#678)
- Changed the ART directory path for datasets and model data stored in `ART_DATA_PATH` to be configurable after importing ART. (#701)
- Changed the preprocessing defence `art.defences.preprocessor.JpegCompression` to support any number of channels, in addition to the already supported inputs with 1 and 3 channels. (#700)
- Changed the calculation of perturbation and direction in `art.attacks.evasion.BoundaryAttack` to follow the reference implementation. These changes result in faster convergence and smaller perturbations. (#761)
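For the `Preprocessor` API entry above: a hypothetical preprocessor sketching the BPDA-with-identity default. The class and its forward step are invented for illustration; only the shape of `estimate_gradient` reflects the described default:

```python
import numpy as np

# Hypothetical preprocessor: the non-differentiable forward step is
# used as-is, while estimate_gradient passes incoming gradients
# straight through (BPDA with the identity function).
class QuantizePreprocessor:
    def __call__(self, x):
        return np.round(x * 4) / 4   # non-differentiable quantisation

    def estimate_gradient(self, x, grad):
        return grad                  # identity approximation (BPDA default)

p = QuantizePreprocessor()
```

Overriding `estimate_gradient` with a more accurate approximation of the forward step's Jacobian is the customisation point the entry describes.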
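For the random-restarts entry above: a numpy sketch of collecting successful adversarial examples across restarts, where each sample keeps the result of the latest restart that succeeded for it. All helper names are illustrative, not ART's API:

```python
import numpy as np

def attack_with_restarts(attack_once, is_adversarial, x, n_restarts):
    """Collect successful adversarial examples across random restarts."""
    x_adv = x.copy()
    for _ in range(n_restarts):
        candidate = attack_once(x)
        success = is_adversarial(candidate)  # boolean mask per sample
        x_adv[success] = candidate[success]
    return x_adv

# Two simulated restarts, each succeeding on a different sample.
restarts = iter([np.array([[9.0], [0.0]]), np.array([[0.0], [9.0]])])
attack_once = lambda x: next(restarts)
is_adversarial = lambda c: c[:, 0] > 5
x_adv = attack_with_restarts(attack_once, is_adversarial, np.zeros((2, 1)), 2)
```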
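For the mask/normalization entry above: a numpy sketch showing why masking the loss gradients before the normalization (here an L-inf sign step) keeps the resulting perturbation directly comparable to the budget `eps`. The helper name and values are illustrative:

```python
import numpy as np

def masked_sign_step(loss_grad, mask, eps_step):
    """Apply the mask to the loss gradients first, then normalize
    (L-inf sign step) only the remaining, unmasked perturbation."""
    return eps_step * np.sign(loss_grad * mask)

loss_grad = np.array([[0.3, -0.7], [0.2, 0.1]])
mask = np.array([[1.0, 0.0], [1.0, 1.0]])
delta = masked_sign_step(loss_grad, mask, 0.1)
```

Masked entries are exactly zero and every other entry has magnitude `eps_step`, so the perturbation's L-inf norm never exceeds the budget.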
Removed
[None]
Fixed
- Fixed a bug in the definition and application of the norm `p` in the cost matrix of the Wasserstein evasion attack `art.attacks.evasion.Wasserstein`, present in the reference implementation. (#712)
- Fixed the handling of fractional batches in the Zeroth Order Optimization (ZOO) attack in `art.attacks.evasion.ZOOAttack` to prevent errors caused by shape mismatches for batches smaller than `batch_size`. (#755)
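For the fractional-batch fix above: a numpy sketch of batching where the final batch may be smaller than `batch_size`, the case the fix handles. The helper is illustrative, not ART's code:

```python
import numpy as np

def iterate_batches(x, batch_size):
    """Yield consecutive batches; the last one may be fractional
    (smaller than batch_size) when len(x) is not a multiple of it."""
    n_batches = int(np.ceil(len(x) / batch_size))
    for i in range(n_batches):
        yield x[i * batch_size:(i + 1) * batch_size]

sizes = [len(b) for b in iterate_batches(np.arange(10), 4)]
```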