Detection of adversarial patches

Supervisor(s): Ching-Yu Kao
Status: finished
Topic: Others
Author: Iheb Ghanmi
Submission: 2022-09-15
Type of Thesis: Bachelor's thesis
Thesis topic in co-operation with the Fraunhofer Institute for Applied and Integrated Security AISEC, Garching

Description

Deep Neural Networks (DNNs) are, despite their phenomenal success in tasks such as object detection and classification, prone to manipulation [1]. Such manipulation is carried out through deliberate, carefully crafted perturbations of input images, known as adversarial attacks. While the introduced noise can be distributed across the whole image and thus remain undetectable to a human, many works [2, 3] demonstrate the effectiveness of adversarial patches, especially in real-time applications. In this thesis we introduce a method that detects and removes adversarial patches. Our defense can be applied to any adversarial input without prior knowledge of the attacked model. We build on the observation underlying Local Gradient Smoothing (LGS): patches introduce high-frequency noise into the image, and suppressing these regions renders the noise ineffective. We use the mask generated in this way as an approximation of the patch location and construct a complete mask using the Shape Completion module of Segment And Complete. Compared to the no-defense case, our method achieves relatively high accuracy and confidence scores, and it retains good performance compared to LGS.
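
The core idea of the defense can be illustrated with a minimal sketch in Python (NumPy/SciPy). In the spirit of LGS, it flags regions of high local gradient energy as the suspected patch and suppresses them. The window size, the threshold, and the function names (estimate_patch_mask, suppress_patch) are illustrative assumptions, not the thesis implementation, and the Shape Completion step of Segment And Complete is not reproduced here.

import numpy as np
from scipy.ndimage import uniform_filter

def estimate_patch_mask(image, window=15, threshold=0.1):
    """Flag regions of high local gradient energy as the suspected patch.

    image: float array in [0, 1], shape (H, W) or (H, W, C).
    Returns a boolean (H, W) mask; window and threshold are illustrative.
    """
    gray = image.mean(axis=-1) if image.ndim == 3 else image
    # First-order gradients capture the high-frequency content that
    # adversarial patches tend to introduce (the LGS observation).
    gy, gx = np.gradient(gray)
    magnitude = np.sqrt(gx ** 2 + gy ** 2)
    # Average the gradient energy over a local window so that densely
    # textured patch regions stand out from smooth natural regions.
    local_energy = uniform_filter(magnitude, size=window)
    return local_energy > threshold

def suppress_patch(image, mask):
    """Render the suspected patch ineffective by zeroing the masked pixels."""
    out = image.copy()
    out[mask] = 0.0
    return out

# Stand-in usage on a random image in place of a real attacked input:
img = np.random.rand(224, 224, 3)
coarse_mask = estimate_patch_mask(img)
restored = suppress_patch(img, coarse_mask)

In the full pipeline described above, the coarse mask from the first step would be completed into a contiguous, patch-shaped region via Shape Completion before removal, rather than being applied directly.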