
Adversarial Attacks and Defenses in Quantum Machine Learning



Supervisor(s): Pascal Debus
Status: finished
Topic: Machine Learning Methods
Author: Inken Grüner
Submission: 2022-09-15
Type of Thesis: Bachelor's thesis
Proof of Concept: No
Thesis topic in co-operation with the Fraunhofer Institute for Applied and Integrated Security AISEC, Garching

Abstract:

Quantum machine learning (QML) is an emerging field that aims to combine the advantages of machine learning and quantum computing. This bachelor's thesis examines the behavior of QML models in an adversarial setting. An adversarial attack aims to deceive QML systems by marginally altering the model input. The change is so slight that it is not humanly perceptible, but by design it has a significant impact on the model.

The focus of this thesis is the comparison of three data encoding techniques: data re-uploading, angle encoding, and amplitude encoding. Data encoding, which transforms classical data into quantum states that can then be used as input to a QML model, is a key component of quantum computing. The thesis investigates how important the encoding strategy is for the accomplishment of a classification task, as well as its impact on the robustness of the models. To this end, systems implementing the various encoding methods are trained on a two-dimensional data set and are then adversarially attacked.

All three models complete the classification task successfully. However, each of the three is equally vulnerable to the attack, as evidenced by a significant decrease in classification accuracy after the attack. Furthermore, applying a defense strategy does not increase robustness.

The results of this work are consistent with those of previous studies attesting that QML models are similarly vulnerable to the considered attack as classical machine learning algorithms. This work serves as a basis for further investigations in this area while highlighting the remaining need for research in quantum adversarial machine learning.
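To make the compared encodings concrete, the following NumPy sketch (not code from the thesis) encodes a two-dimensional sample with angle encoding and amplitude encoding, then checks how far a small, FGSM-style input perturbation shifts the encoded quantum state. The sample values, perturbation size, and sign vector are invented for demonstration; data re-uploading, the third technique, repeats such an encoding block between trainable layers and is not shown here.

```python
import numpy as np

def angle_encode(x):
    """Angle encoding: each feature x_i is written into its own qubit
    via RY(x_i)|0>, so a 2-D point becomes a 2-qubit product state."""
    state = np.array([1.0])
    for xi in x:
        qubit = np.array([np.cos(xi / 2), np.sin(xi / 2)])  # RY(xi)|0>
        state = np.kron(state, qubit)
    return state

def amplitude_encode(x):
    """Amplitude encoding: the normalized feature vector itself is used
    as the amplitude vector of the quantum state."""
    x = np.asarray(x, dtype=float)
    return x / np.linalg.norm(x)

# A 2-D sample and a small adversarial-style perturbation of it.
# The sign vector stands in for the sign of a loss gradient (FGSM-style).
x = np.array([0.8, 1.3])
eps = 0.05
x_adv = x + eps * np.sign(np.array([1.0, -1.0]))

for name, enc in [("angle", angle_encode), ("amplitude", amplitude_encode)]:
    clean, attacked = enc(x), enc(x_adv)
    # Fidelity |<clean|attacked>|^2 quantifies how much the perturbation
    # moves the encoded state; values below 1 indicate a state change.
    fidelity = abs(np.dot(clean, attacked)) ** 2
    print(f"{name} encoding fidelity after perturbation: {fidelity:.4f}")
```

Note that the perturbation is tiny in input space, yet it measurably changes the encoded state under both encodings; how that change propagates through a trained circuit is exactly what the thesis's robustness comparison probes.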