
Robustness Against Adversarial Examples and Noise

Supervisor(s): Karla Markert, Ching-Yu Kao
Status: finished
Topic: Others
Author: Stefan Niclas Heun
Submission: 2022-02-15
Type of Thesis: Bachelor's Thesis
Thesis topic in co-operation with the Fraunhofer Institute for Applied and Integrated Security AISEC, Garching

Description

Automatic speech recognition (ASR) systems have become a crucial part of human-computer interaction through the rise of voice-controlled smart assistants. Everyday use requires robust systems, where robustness is two-fold: systems should be able to deal with noise, and they should be secure against adversarial attacks. In audio adversarial attacks, audio inputs are perturbed in such a way that they sound harmless to humans but fool the system into a misclassification. Noise robustness, on the other hand, describes a classifier's capability to transcribe an input correctly even if it is perturbed by different kinds of noise. Our detailed summary of the related work shows that the relationship between robustness against adversarial examples and robustness against noise is controversially discussed in the domain of image classifiers and has not yet been evaluated in the domain of speech recognition systems.
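
To make the attack setting concrete, the following is a minimal sketch of a targeted audio adversarial attack in the style of Carlini and Wagner (2018), which was originally demonstrated against DeepSpeech. The model handle, the CTC loss setup, and all hyperparameters here are illustrative assumptions, not the exact attack evaluated in the thesis.

    import torch

    def targeted_attack(model, audio, target, eps=0.05, steps=1000, lr=1e-3):
        """Optimize a small perturbation delta so that model(audio + delta)
        is transcribed as `target` while the perturbation stays quiet.
        Assumes `model` maps a waveform to per-frame log-probabilities of
        shape (time, batch, characters); this is an illustrative sketch."""
        delta = torch.zeros_like(audio, requires_grad=True)
        opt = torch.optim.Adam([delta], lr=lr)
        ctc = torch.nn.CTCLoss(blank=0)

        for _ in range(steps):
            log_probs = model(audio + delta).log_softmax(dim=-1)
            input_lengths = torch.tensor([log_probs.size(0)])
            target_lengths = torch.tensor([target.numel()])
            loss = ctc(log_probs, target.unsqueeze(0),
                       input_lengths, target_lengths)

            opt.zero_grad()
            loss.backward()
            opt.step()
            # Project back onto an L-infinity ball so the perturbation
            # remains barely audible to humans.
            with torch.no_grad():
                delta.clamp_(-eps, eps)

        return (audio + delta).detach()
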

Hence, in this thesis we explore whether, and to what extent, adversarial robustness and noise robustness are related, using the ASR system DeepSpeech as a case study. To this end, we evaluate the impact of (general) noise-augmented training on both noise robustness and adversarial robustness. Our results imply that gaining noise robustness can significantly harm the adversarial robustness of a speech recognition system. This suggests that adversarial and noise robustness are negatively correlated in the domain of speech recognition systems, and we therefore propose to evaluate future robustness strategies with respect to their impact on both types of robustness.
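
As a rough illustration of the noise augmentation step, the sketch below mixes Gaussian noise into a training waveform at a chosen signal-to-noise ratio. The Gaussian noise model and the SNR parameterization are assumptions for illustration; the thesis considers general noise augmentation rather than this specific recipe.

    import torch

    def add_noise(waveform, snr_db):
        """Mix Gaussian noise into a waveform at the given signal-to-noise
        ratio in decibels (illustrative augmentation, not the exact recipe
        used in the thesis)."""
        signal_power = waveform.pow(2).mean()
        noise = torch.randn_like(waveform)
        noise_power = noise.pow(2).mean()
        # Scale the noise so that
        # 10 * log10(signal_power / scaled_noise_power) == snr_db.
        scale = torch.sqrt(signal_power / (noise_power * 10 ** (snr_db / 10)))
        return waveform + scale * noise

In noise-augmented training, such a function would typically be applied on the fly, for example with an SNR drawn uniformly from a range per utterance, so that the model sees a different noisy version of each example in every epoch.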