
Reconstructing images after adversarial patch removal

Supervisor(s): Ching-Yu Kao
Status: finished
Topic: Others
Author: Houcemeddine Ben Ayed
Submission: 2022-08-15
Type of Thesis: Bachelor's thesis
Thesis topic in co-operation with the Fraunhofer Institute for Applied and Integrated Security AISEC, Garching


In recent years, image classifiers have been threatened by adversarial patches (also called stickers). The attack consists of applying an adversarial patch to real-life objects (usually printed on the object) to cause the classifier to misclassify them. Various frameworks have been suggested to address this problem, mainly because it affects the performance of real-time applications such as autonomous driving and face detection. In our work, we focus on one such framework, which mitigates the attack by detecting the adversarial patch, removing it from the image, and then feeding the image to the classifier either as-is or after reconstructing the removed region (inpainting).
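The pipeline described above can be sketched in a few lines. The following is a minimal toy illustration, not the thesis implementation: the detector, the zero-masking step, and the mean-fill inpainter are all simplified stand-ins (a real framework would use a trained patch detector and a learned inpainting model), and all function names are hypothetical.

```python
import numpy as np

def detect_patch(img):
    # Toy detector: flags saturated (very bright) pixels as the adversarial
    # patch. Real frameworks use a trained detection model instead.
    return img > 0.9

def mask_patch(img, mask):
    # Remove the detected patch by zeroing out the flagged region.
    out = img.copy()
    out[mask] = 0.0
    return out

def inpaint(img, mask):
    # Toy inpainting: fill the removed region with the mean of the
    # remaining pixels. Real methods use diffusion- or learning-based
    # reconstruction of the missing region.
    out = img.copy()
    out[mask] = img[~mask].mean()
    return out

# Simulated grayscale image with a bright 2x2 adversarial "patch".
img = np.full((8, 8), 0.5)
img[2:4, 2:4] = 1.0

mask = detect_patch(img)
masked = mask_patch(img, mask)       # variant 1: feed classifier as-is
restored = inpaint(img, mask)        # variant 2: feed classifier after inpainting
```

The two arrays `masked` and `restored` correspond to the two variants the framework can pass to the classifier; the thesis compares exactly these two options.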

We test two hypotheses concerning this framework. The first states that reconstructing the image improves the classification results (compared to simply removing/masking the adversarial patch). Our second hypothesis is that better inpainting implies better classification results.

We chose inpainting methods we deemed suitable for the task and ran experiments on two different datasets, simulating the framework's operation in two applications: face detection and autonomous driving. The experimental results were promising and supported both of our hypotheses.