Visualization Methods for Adversarial Examples

Supervisor(s): Karla Markert, Philip Sperl
Status: finished
Topic: Others
Author: Romain Parracone
Submission: 2020-11-16
Type of Thesis: Master's Thesis
Thesis topic in co-operation with the Fraunhofer Institute for Applied and Integrated Security AISEC, Garching

Description

Automatic Speech Recognition (ASR) is becoming increasingly important as ASR systems become part of our daily lives, notably through virtual assistants such as Cortana and Siri and devices such as Amazon Echo and Google Home. Most of today's ASR systems are based on neural networks, and their vulnerability to adversarial examples has become a matter of research interest. In parallel, research on neural networks in the image domain has progressed, including methods for explaining a network's predictions: techniques referred to as attribution methods visualize the regions of an image that strongly influence its classification. The goal of this thesis is to determine whether these methods can be adapted to audio neural networks and whether they can be used to visualize and detect adversarial examples. The results of this work show that these interpretation methods are indeed applicable to ASR neural networks. Moreover, the attribution visualizations differ significantly between benign data and adversarial examples, which allows an input to be correctly classified as benign or adversarial in most attack scenarios, with an accuracy of up to 98%.
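
As a minimal illustration of the attribution idea, the sketch below computes a plain gradient saliency map for a toy PyTorch classifier on spectrogram-shaped input: the magnitude of the gradient of the top-class score with respect to the input marks the regions that most influence the classification. The model, input shapes, and function name are assumptions for illustration only and do not reflect the thesis's actual ASR models or attribution methods.

    import torch

    # Gradient saliency (illustrative sketch, not the thesis's actual setup):
    # the absolute gradient of the top predicted class score with respect to
    # the input highlights the input regions driving the classification.
    def saliency_attribution(model, x):
        model.eval()
        x = x.clone().detach().requires_grad_(True)
        scores = model(x)                          # (batch, num_classes)
        top = scores.argmax(dim=1, keepdim=True)   # predicted class per sample
        scores.gather(1, top).sum().backward()     # one backward pass for the batch
        return x.grad.abs()                        # same shape as the input

    # Toy classifier on spectrogram-like input (shapes are assumptions).
    model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(80 * 100, 10))
    spectrogram = torch.randn(1, 80, 100)          # (batch, mel bins, frames)
    attribution = saliency_attribution(model, spectrogram)
    print(attribution.shape)                       # torch.Size([1, 80, 100])

Richer attribution methods such as Integrated Gradients or Layer-wise Relevance Propagation follow the same interface as this sketch: an input goes in, a per-element relevance map of the same shape comes out, which can then be compared between benign and adversarial inputs.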