
Adversarial and Secure Machine Learning


Seminar, 2 SWS / 5 ECTS (course description)
Organizers: Bojan Kolosnjaji and Ching-Yu Kao
Time and place:

Thu 25.04., Tue 14.05., Thu 16.05., Tue 21.05., Thu 23.05., Tue 28.05., and Fri 31.05.

16:00 - 18:00 / seminar room 01.08.033

Start: 2019-04-23

The lecture is given in English.
The slides are available in English.
The exam will be in English.

News:

  • The Introductory Meeting is finished; here are the slides.
  • The Kick-off Meeting is finished. If you couldn't attend it, don't worry: here are the slides.
  • Matching is finished; all the seminar slots are filled.
  • Seminar topics are published at the bottom of this page. Please send an ordered list of your three favorite topics to kolosnjaji@sec.in.tum.de by 03.03.

Preliminary meeting

Preliminary meeting: Tuesday, January 29, 2019 at 16:30 in room 01.08.033.


Researchers and engineers in information security have successfully deployed
systems based on machine learning and data mining techniques to detect
suspicious activities, filter spam, recognize threats, and more. These systems
typically contain a classifier that flags certain instances as malicious based
on a set of features.

Unfortunately, there is evidence that adversaries have investigated several
approaches to deceiving a classifier by disguising a malicious instance as
innocent. For example, spammers may add unrelated words or sentences to a junk
mail to avoid detection by a spam filter. Furthermore, some adversaries may even
be able to craft training data that misleads the learning algorithm.
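To make the evasion side concrete, here is a minimal, purely illustrative sketch of a gradient-based evasion attack in the spirit of the fast gradient sign method discussed in topic 4 (Goodfellow et al.). The model, the input tensor x, its label y, and the perturbation budget epsilon are hypothetical placeholders, not material from the seminar.

```python
# Illustrative sketch only: a fast-gradient-sign-style evasion attack.
# "model" is a hypothetical differentiable classifier, "x" a batched input
# tensor, "y" its true label, and "epsilon" bounds the perturbation size.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.1):
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)   # loss of the true label
    loss.backward()                           # gradient of the loss w.r.t. the input
    # Move each input feature a small step in the direction that increases the loss.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()
```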

The ongoing arms race between adversaries and classifiers forces us to
reconsider the vulnerabilities of learning algorithms, giving rise to a research
field known as adversarial learning. Its goal is to develop learning algorithms
that remain robust in adversarial environments.
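As a toy illustration of how fragile standard learners can be at training time, the sketch below randomly flips a fraction of training labels before fitting a linear SVM and compares its test accuracy against a cleanly trained model. This is only a crude stand-in for the optimized label-flip and poisoning attacks covered in topics 1 and 2; the synthetic dataset, the 20% flip rate, and the random seeds are arbitrary assumptions.

```python
# Illustrative sketch only: random label-flip contamination of SVM training data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf_clean = LinearSVC(max_iter=5000).fit(X_tr, y_tr)

# The adversary flips the labels of 20% of the training points.
rng = np.random.default_rng(0)
flip = rng.choice(len(y_tr), size=len(y_tr) // 5, replace=False)
y_poisoned = y_tr.copy()
y_poisoned[flip] = 1 - y_poisoned[flip]

clf_poisoned = LinearSVC(max_iter=5000).fit(X_tr, y_poisoned)

print("clean test accuracy:   ", clf_clean.score(X_te, y_te))
print("poisoned test accuracy:", clf_poisoned.score(X_te, y_te))
```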

In this seminar, several hot topics in this line of research will be discussed.
The intention is to give students an overview of state-of-the-art attacks on and
defenses for machine learning algorithms, and to encourage them to continue
exploring this field. Each student will be assigned two research papers. After
studying the papers, students are required to write a short report and give a
45-minute presentation on their understanding of the papers.


This is a block seminar; most of the meetings will take place in May 2019. To find out more, check the slides from the kick-off meeting or see the dates at the top of the page.


Seminar topics:


1. Attacks on SVM, Thordur Atlason, Date: 14.05.

Biggio, Battista, Blaine Nelson, and Pavel Laskov. "Poisoning attacks against support vector machines." Proceedings of the 29th International Conference on International Conference on Machine Learning. Omnipress, 2012.

Xiao, H., Xiao, H. & Eckert, C., 2012. Adversarial Label Flips Attack on Support Vector Machines. In 20th European Conference on Artificial Intelligence (ECAI). Montpellier, France. Available at: http://ebooks.iospress.nl/volumearticle/7084

Zhou, Y., Kantarcioglu, M., Thuraisingham, B., & Xi, B. (2012, August). Adversarial support vector machine learning. In Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining (pp. 1059-1067). ACM.

2. Robustness and regularization in SVM, Henrik von Kleist, Date: 21.05.

Xu, Huan, Constantine Caramanis, and Shie Mannor. "Robustness and regularization of support vector machines." Journal of Machine Learning Research 10.Jul (2009): 1485-1510.

Biggio, Battista, Blaine Nelson, and Pavel Laskov. "Support vector machines under adversarial label noise." Asian Conference on Machine Learning. 2011.

Demontis, A., Biggio, B., Fumera, G., Giacinto, G., & Roli, F. (2017). Infinity-Norm Support Vector Machines Against Adversarial Label Contamination. In ITASEC (pp. 106-115).

3. Feature selection and adversarial learning, Stefan Su, Date: 14.05.

Globerson, Amir, and Sam Roweis. "Nightmare at test time: robust learning by feature deletion." Proceedings of the 23rd international conference on Machine learning. ACM, 2006.

Xiao, H. et al., 2015. Is Feature Selection Secure against Training Data Poisoning? Int’l Conf. on Machine Learning (ICML), 37.

Xu, Weilin, David Evans, and Yanjun Qi. "Feature squeezing: Detecting adversarial examples in deep neural networks." arXiv preprint arXiv:1704.01155 (2017).

4. Neural networks under attack, Simon Kibler, Date: 16.05.

Szegedy, C. et al., 2013. Intriguing properties of neural networks. CoRR, pp. 1–10. Available at: http://arxiv.org/abs/1312.6199 [Accessed July 15, 2014].

Goodfellow, I.J., Shlens, J. & Szegedy, C., 2014. Explaining and Harnessing Adversarial Examples, pp. 1–11. Available at: http://arxiv.org/abs/1412.6572.

Nguyen, A., Yosinski, J., & Clune, J. (2015). Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 427-436).

5. Neural networks - adversarial examples, Muhammad Faizan, Date: 16.05.

Moosavi-Dezfooli, S.-M. et al., 2017. Universal adversarial perturbations. In CVPR. Available at: http://arxiv.org/abs/1705.09554.

Tanay, Thomas, and Lewis Griffin. "A Boundary Tilting Perspective on the Phenomenon of Adversarial Examples." arXiv preprint arXiv:1608.07690 (2016).

Kurakin, Alexey, Ian Goodfellow, and Samy Bengio. "Adversarial examples in the physical world." arXiv preprint arXiv:1607.02533 (2016).

6. Neural Networks - sparsity and robustness, Roland Würsching, Date: 21.05.

Papyan, Vardan, Yaniv Romano, and Michael Elad. "Convolutional neural networks analyzed via convolutional sparse coding." stat 1050 (2016): 27.

Cisse, Moustapha, et al. "Parseval networks: Improving robustness to adversarial examples." International Conference on Machine Learning. 2017.

7. Neural networks - general strategies for defense, Sho Maeoki, Date: 23.05.

Bendale, Abhijit, and Terrance E. Boult. "Towards open set deep networks." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016.

Liu, Weiyang, et al. "Large-Margin Softmax Loss for Convolutional Neural Networks." ICML. 2016.

Madry, A., Makelov, A., Schmidt, L., Tsipras, D., & Vladu, A. (2017). Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083.

8. Robustness, defense, verification

Hein, M., & Andriushchenko, M. (2017). Formal guarantees on the robustness of a classifier against adversarial manipulation. In Advances in Neural Information Processing Systems (pp. 2266-2276).

Raghunathan, A., Steinhardt, J., & Liang, P. (2018). Certified defenses against adversarial examples. arXiv preprint arXiv:1801.09344.

9. Ensembles against adversaries, Patrik Schäffler, Date: 23.05.

Biggio, Battista, Giorgio Fumera, and Fabio Roli. "Multiple classifier systems for robust classifier design in adversarial environments." International Journal of Machine Learning and Cybernetics 1.1-4 (2010): 27-41.

Tramèr, Florian, et al. "Ensemble adversarial training: Attacks and defenses." arXiv preprint arXiv:1705.07204 (2017).

10. Generative Adversarial Networks, Vadim Goryanov, Date: 28.05.

Goodfellow, Ian, et al. "Generative adversarial nets." Advances in neural information processing systems. 2014.

Denton, Emily L., Soumith Chintala, and Rob Fergus. "Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks." Advances in neural information processing systems. 2015.

Arjovsky, M., Chintala, S., & Bottou, L. (2017, July). Wasserstein generative adversarial networks. In International Conference on Machine Learning (pp. 214-223)

11. Attacks on Reinforcement Learning, Date: 28.05.

Lin, Yen-Chen, et al. "Tactics of adversarial attack on deep reinforcement learning agents." Proceedings of the 26th International Joint Conference on Artificial Intelligence. AAAI Press, 2017.

Huang, Sandy, et al. "Adversarial attacks on neural network policies." arXiv preprint arXiv:1702.02284 (2017).

Pattanaik, A., Tang, Z., Liu, S., Bommannan, G., & Chowdhary, G. (2018, July). Robust deep reinforcement learning with adversarial attacks. In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems (pp. 2040-2042). International Foundation for Autonomous Agents and Multiagent Systems.

12. Attacks on Natural Language Processing, Wan-Chen Chen, Date: 31.05.

Lowd, Daniel, and Christopher Meek. "Good Word Attacks on Statistical Spam Filters." CEAS. Vol. 2005. 2005.

Mei, Shike, and Xiaojin Zhu. "The security of latent dirichlet allocation." Artificial Intelligence and Statistics. 2015.

Chen, H., Zhang, H., Chen, P. Y., Yi, J., & Hsieh, C. J. (2017). Attacking visual language grounding with adversarial examples: A case study on neural image captioning. arXiv preprint arXiv:1712.02051.

13. Evading malware detection, Nicolas Jakob, Date: 31.05.

Grosse, Kathrin, et al. "Adversarial examples for malware detection." European Symposium on Research in Computer Security. Springer, Cham, 2017.

Hu, W. & Tan, Y., 2017. Generating Adversarial Malware Examples for Black-Box Attacks Based on GAN. Available at: http://arxiv.org/abs/1702.05983.

Yang, W., Kong, D., Xie, T., & Gunter, C. A. (2017, December). Malware detection in adversarial settings: Exploiting feature evolutions and confusions in android apps. In Proceedings of the 33rd Annual Computer Security Applications Conference (pp. 288-302). ACM.

14. Attacks on malware detection - injecting noise and poisoning, Tamara Schnyrina, Date: 04.06.

Biggio, Battista, et al. "Poisoning behavioral malware clustering." Proceedings of the 2014 Workshop on Artificial Intelligence and Security. ACM, 2014.

Chen, Yizheng, et al. "Practical attacks against graph-based clustering." Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. ACM, 2017.

Rubinstein, B. I., Nelson, B., Huang, L., Joseph, A. D., Lau, S. H., Rao, S., ... & Tygar, J. D. (2009, November). Antidote: understanding and defending against poisoning of anomaly detectors. In Proceedings of the 9th ACM SIGCOMM conference on Internet measurement (pp. 1-14). ACM.

15. Attacks on PDF malware detection, Fabian Nhan, Date: 04.06.

Xu, Weilin, Yanjun Qi, and David Evans. "Automatically evading classifiers." Proceedings of the 2016 Network and Distributed Systems Symposium. 2016.

Smutz, Charles, and Angelos Stavrou. "When a Tree Falls: Using Diversity in Ensemble Classifiers to Identify Evasion in Malware Detectors." NDSS. 2016.

Maiorca, D., Biggio, B., & Giacinto, G. (2018). Towards Robust Detection of Adversarial Infection Vectors: Lessons Learned in PDF Malware. arXiv preprint arXiv:1811.00830.

16. Adversarial examples for audio data, Sebastian Bachem, Date: 06.06.

Carlini, Nicholas, and David Wagner. "Audio adversarial examples: Targeted attacks on speech-to-text." 2018 IEEE Security and Privacy Workshops (SPW). IEEE, 2018.

Yakura, Hiromu, and Jun Sakuma. "Robust Audio Adversarial Example for a Physical Attack." arXiv preprint arXiv:1810.11793 (2018).

Kreuk, F., Adi, Y., Cisse, M., & Keshet, J. (2018, April). Fooling End-To-End Speaker Verification With Adversarial Examples. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 1962-1966). IEEE

17. Adversarial attacks for graph data, Tim Pfeifle, Date: 06.06.

Zügner, Daniel, Amir Akbarnejad, and Stephan Günnemann. "Adversarial attacks on neural networks for graph data." Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. ACM, 2018.

Kosut, Oliver, et al. "Malicious data attacks on the smart grid." IEEE Transactions on Smart Grid 2.4 (2011): 645-658.