
Analyzing the Robustness of Memory Augmented Neural Networks

Supervisor(s): Dr. Huang Xiao
Status: finished
Topic: Others
Author: Daniel Kowatsch
Submission: 2018-10-15
Type of Thesis: Master's thesis
Thesis topic in co-operation with the Fraunhofer Institute for Applied and Integrated Security AISEC, Garching

Description

Recent studies have shown that attackers can generate adversarial examples for CNNs with great success.
To defend against these attacks, techniques like Random Self Ensemble are deployed.
These defensive measures, however, can also decrease accuracy.
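Random Self Ensemble, as referred to above, injects random noise into the network at inference time and averages the predictions over several noisy forward passes. A minimal sketch of this idea, with a toy linear classifier standing in for a real network (all weights and parameters here are illustrative, not taken from the thesis):

```python
# Sketch of Random Self Ensemble (RSE) inference: perturb each forward
# pass with Gaussian noise and average the resulting predictions.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))  # toy weights: 4 features -> 3 classes


def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()


def rse_predict(x, n_passes=50, sigma=0.1):
    """Average softmax outputs over noisy forward passes."""
    probs = np.zeros(3)
    for _ in range(n_passes):
        noisy = x + rng.normal(scale=sigma, size=x.shape)  # inject noise
        probs += softmax(noisy @ W)
    return probs / n_passes


x = np.array([1.0, 0.5, -0.3, 0.2])
print("averaged class probabilities:", rse_predict(x))
```

Because each prediction is an ensemble over random perturbations, a single crafted adversarial input is less likely to fool every noisy pass consistently.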
This loss might be compensated for by Memory Augmented Neural Networks, which are capable of one-shot learning.
Another recently introduced threat is the extraction of information contained in the training data, given previously obtained or guessed structures.
In this thesis, we present, to the best of our knowledge, the first taxonomy for attacker models including privacy violations.
Furthermore, we were not able to reproduce the results of 'The Secret Sharer: Measuring Unintended Neural Network Memorization & Extracting Secrets' and thus show that the threat of secret extraction is not present for all models, not even for models that perform well and achieve a test accuracy of 99%.
We also show mathematical limitations to this approach, making it unsuitable for general secret extraction.
Additionally, we show that the Carlini & Wagner attack achieves a success rate of at best 15.8% against sequential models and can easily be defeated by Random Self Ensemble without any significant loss in accuracy.
We found no significant benefit of using Memory Augmented Neural Networks in an adversarial setting.