
Evaluating defense strategies for facial recognition systems

Supervisor(s): Ching-Yu Kao
Status: finished
Topic: Others
Author: Hendrik Pauthner
Submission: 2019-01-15
Type of Thesis: Bachelor's Thesis
Thesis topic in co-operation with the Fraunhofer Institute for Applied and Integrated Security AISEC, Garching

Description

Although deep learning systems achieve superhuman performance on selected
perceptual tasks, researchers have recently demonstrated that they are not
infallible. Images with methodically crafted perturbations, also called adversarial
examples, can deceive these systems and cause misclassification. This is particularly
problematic for face recognition systems because they are increasingly used in security-
critical domains such as access control and camera surveillance. These systems are
especially vulnerable to an attack proposed by Sharif et al. that is based on perturbed
eyeglass frames, which can be worn to mislead the classifying model. To date, most
research on adversarial defense is geared towards detecting adversarial perturbations
that are invisible to the human eye. However, handling attacks based on visible
adversarial perturbations, such as the eyeglasses attack by Sharif et al., poses
different challenges to the defense, which underlines the importance of providing
robust defense strategies against this kind of attack as well.
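
To make the attack setting concrete, the following is a minimal sketch of a
mask-restricted adversarial perturbation in the spirit of the eyeglasses attack.
The classifier model, the input image (a CHW tensor in [0, 1]), the binary
glasses_mask marking the frame region, and the step counts are all illustrative
assumptions; the actual optimization procedure of Sharif et al. differs in detail.

    import torch
    import torch.nn.functional as F

    def eyeglass_attack(model, image, true_label, glasses_mask,
                        steps=100, lr=0.01):
        # The perturbation is optimized only inside the eyeglass-frame
        # region (glasses_mask == 1); all other pixels stay untouched.
        delta = torch.zeros_like(image, requires_grad=True)
        optimizer = torch.optim.Adam([delta], lr=lr)
        for _ in range(steps):
            adv = torch.clamp(image + delta * glasses_mask, 0.0, 1.0)
            logits = model(adv.unsqueeze(0))
            # Untargeted attack: push the prediction away from the true identity.
            loss = -F.cross_entropy(logits, true_label.unsqueeze(0))
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        return torch.clamp(image + delta.detach() * glasses_mask, 0.0, 1.0)
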
To investigate such defenses, this work applies a transfer learning approach to
train a state-of-the-art face recognition system on a relatively small dataset,
which is later used for evaluation. It is also tested whether the transfer learning
approach succeeds even when the transferred model was trained on a much broader
task than face recognition.
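
A minimal sketch of such a transfer learning setup, assuming a torchvision
backbone pretrained on ImageNet (a much broader task than face recognition)
whose classification head is replaced and retrained on the small face dataset;
the backbone choice, the number of identities, and the optimizer settings are
placeholders, not the thesis configuration:

    import torch
    import torch.nn as nn
    from torchvision import models

    def build_face_recognizer(num_identities, freeze_backbone=True):
        # Reuse features learned on a broad task (ImageNet classification).
        net = models.resnet50(pretrained=True)
        if freeze_backbone:
            for p in net.parameters():
                p.requires_grad = False  # keep the pretrained features fixed
        # Replace the 1000-class ImageNet head with a face-identity head.
        net.fc = nn.Linear(net.fc.in_features, num_identities)
        return net

    model = build_face_recognizer(num_identities=10)
    # Only the new head is trained on the small face dataset.
    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
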
Furthermore, different existing defense strategies are surveyed and assessed in
terms of their effectiveness against the adversarial eyeglasses of Sharif et al.
Building on this, one of these approaches, a method based on the perturbed image's
saliency map, is applied and optimized by performing a grid search for the
hyper-parameter combination that yields the best defensive behaviour.
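
A minimal sketch of a saliency mask defense of this kind, assuming a
gradient-based saliency map: pixels whose saliency exceeds a threshold are
treated as part of the perturbation and masked out before reclassification.
The threshold and the fill value stand in for the tuned hyper-parameters; all
names and the grid values are illustrative assumptions, not the thesis
implementation.

    import itertools
    import torch

    def saliency_map(model, image):
        # Gradient-based saliency: input-gradient magnitude of the top
        # predicted class, reduced over the color channels.
        x = image.clone().unsqueeze(0).requires_grad_(True)
        logits = model(x)
        logits[0, logits.argmax()].backward()
        return x.grad[0].abs().max(dim=0).values  # H x W map

    def defend(model, image, threshold, fill):
        # Mask out high-saliency pixels (suspected perturbation), then reclassify.
        sal = saliency_map(model, image)
        mask = (sal > threshold * sal.max()).float()
        cleaned = image * (1.0 - mask) + fill * mask
        with torch.no_grad():
            return model(cleaned.unsqueeze(0)).argmax().item()

    def grid_search(model, val_images, val_labels):
        # Exhaustively try hyper-parameter combinations on a validation set.
        best = (0.0, None, None)
        for t, f in itertools.product([0.2, 0.4, 0.6, 0.8], [0.0, 0.5, 1.0]):
            hits = sum(defend(model, img, t, f) == y
                       for img, y in zip(val_images, val_labels))
            acc = hits / len(val_labels)
            if acc > best[0]:
                best = (acc, t, f)
        return best  # (accuracy, threshold, fill)
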
Empirical evaluation shows that applying the saliency mask method enables a
trained face recognition system to significantly reduce the success rate of the
reproduced adversarial attack, from 85.0% to as low as 16.7%. Furthermore, up to
56.9% of the previously successful attacks could be classified as the correct
identity after applying the defense. Finally, the outcomes of the experiments are
interpreted, and possible reasons that prevented the saliency mask method from
achieving better results are discussed.