
Anomaly detection methods to detect adversarial examples

Supervisor(s): Ching-Yu Kao
Status: finished
Topic: Anomaly Detection
Author: Alexander Wagner
Submission: 2021-09-15
Type of Thesis: Bachelor's thesis
Thesis topic in co-operation with the Fraunhofer Institute for Applied and Integrated Security AISEC, Garching

Description

Deep learning has achieved remarkable results in recent years and outperforms humans in some areas.

As a result, the popularity of deep learning systems is constantly rising, and they can now be found in daily life. However, this performance also has its limitations. Recent studies show that deep learning systems are vulnerable to small perturbations of the input data.
These perturbations are intentionally crafted into the input data to cause the neural network to misclassify its output; the crafting process is known as an adversarial attack, and the resulting inputs are called adversarial examples.
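For illustration, one well-known way to craft such a perturbation is the Fast Gradient Sign Method (FGSM). The minimal sketch below assumes PyTorch and a classifier that returns logits; the attacks actually evaluated in the thesis are not specified on this page.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, epsilon=0.03):
    # Illustrative FGSM sketch (hypothetical helper, not from the thesis):
    # take a small step of size epsilon in the direction of the loss
    # gradient, which is often enough to flip the model's prediction.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    # Clamp back to the valid input range [0, 1].
    return x_adv.clamp(0.0, 1.0).detach()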
This poses a significant security risk, as adversarial attacks can massively degrade the performance of deep learning systems.
Therefore, it is crucial to detect adversarial attacks in deep learning systems.
There are supervised and unsupervised approaches to detecting such attacks. Since supervised approaches do not scale well, we use an unsupervised approach in this thesis.
In this work, we present three different autoencoder architectures and study how well they detect adversarial attacks.
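The three architectures studied in the thesis are not described on this page; as a rough illustration of the general principle, the sketch below assumes PyTorch, a simple fully connected autoencoder trained on clean data only, and a hypothetical reconstruction-error threshold calibrated on clean validation data.

import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    # Minimal fully connected autoencoder for flattened inputs
    # (a hypothetical stand-in for the thesis architectures).
    def __init__(self, in_dim=784, hidden_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(hidden_dim, in_dim), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

def is_adversarial(autoencoder, x, threshold):
    # Because the autoencoder is trained on clean data only, adversarial
    # inputs tend to reconstruct poorly; flag inputs whose per-sample
    # reconstruction error exceeds the threshold.
    with torch.no_grad():
        error = torch.mean((autoencoder(x) - x) ** 2, dim=1)
    return error > threshold

The threshold trades off false alarms on clean inputs against missed attacks, which is why it is typically chosen from the reconstruction errors observed on clean validation data.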