Adversarial and Secure Machine Learning

Seminar, 2 SWS / 5 ECTS (course description)
Organizers: Bojan Kolosnjaji and Huang Xiao
Time and location:

Tuesdays, 16:00-18:00, room 01.06.011 (seminar room)

Start: 2018-04-09

The seminar is given in English.
The slides are available in English.

News:

The preliminary meeting has taken place. We have uploaded the slides for those who could not attend.

Slides from the first seminar meeting have been uploaded here.

ATTENTION: No seminar meeting on 03.07. The talk scheduled for that date has been moved to 10.07., as announced at the last meeting.

 

Preliminary meeting

Preliminary meeting: Monday, January 29, 2018 at 13:00 in room 01.08.033.

 

Researchers and engineers in information security have successfully deployed
systems based on machine learning and data mining techniques to detect
suspicious activities, filter spam, recognize threats, and so on. These systems
typically contain a classifier that flags certain instances as malicious based
on a set of features.

Unfortunately, there is evidence that adversaries have investigated several
approaches to deceive a classifier by disguising a malicious instance as an
innocent one. For example, spammers may add unrelated words or sentences to a
junk mail to evade detection by a spam filter. Furthermore, some adversaries
may be capable of crafting training data that misleads the learning algorithm.
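
To make these two attack types concrete, here is a minimal sketch (not part of the seminar material): it trains a toy bag-of-words spam filter with scikit-learn, then shows a "good word" evasion attack and a label-flip poisoning attack against it. The corpus, the chosen "good words", and all variable names are purely illustrative assumptions.

    # Toy illustration of the two attack types described above, using scikit-learn.
    # Corpus, words, and variable names are purely illustrative assumptions.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    # Tiny hypothetical training set: 1 = spam, 0 = ham.
    mails = ["cheap pills buy now", "meeting agenda attached",
             "win money fast", "lunch tomorrow at noon"]
    labels = [1, 0, 1, 0]

    vec = CountVectorizer()
    X = vec.fit_transform(mails)
    clf = MultinomialNB().fit(X, labels)

    spam = "cheap pills win money"
    print(clf.predict(vec.transform([spam])))            # [1]: flagged as spam

    # Evasion ("good word" attack): append words the filter associates with ham.
    evasive = spam + " meeting agenda attached lunch tomorrow noon"
    print(clf.predict(vec.transform([evasive])))         # [0]: now slips through as ham

    # Poisoning (label flips): an attacker who controls training labels
    # misleads the learner before it is ever deployed.
    flipped = [1 - y for y in labels]
    clf_poisoned = MultinomialNB().fit(X, flipped)
    print(clf_poisoned.predict(vec.transform([spam])))   # [0]: poisoned model misses real spam

Real attacks, of course, operate under tighter constraints (limited knowledge of the model, limited control over features or labels), which is exactly what the papers listed below investigate.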

The ongoing arms race between adversaries and classifiers forces us to reconsider
the vulnerabilities of learning algorithms, giving rise to a research field known as
adversarial learning. Its goal is to develop learning algorithms that remain robust
in adversarial environments.

In this seminar, several hot topics in this line of research will be discussed.
The intention is to give students an overview of state-of-the-art attack and
defense techniques for machine learning algorithms and to encourage them to
continue exploring this field. Each student will be assigned two research
papers. After studying the papers, students are required to write a short report
and give a 30-minute presentation on their understanding of the papers.

List of seminar topics and corresponding papers:

 

Topic 1: Foundation (15.05.2018.) (David Glavas)

Lowd, Daniel, and Christopher Meek. "Adversarial learning." Proceedings of the eleventh ACM SIGKDD international conference on Knowledge discovery in data mining. ACM, 2005.

Barreno, Marco, et al. "Can machine learning be secure?." Proceedings of the 2006 ACM Symposium on Information, computer and communications security. ACM, 2006.

Topic 2: Attacks on SVM (15.05.2018.) (Alexander Lehner)

Biggio, Battista, Blaine Nelson, and Pavel Laskov. "Poisoning attacks against support vector machines." Proceedings of the 29th International Conference on International Conference on Machine Learning. Omnipress, 2012.

Xiao, Han, Huang Xiao, and Claudia Eckert. "Adversarial Label Flips Attack on Support Vector Machines." 20th European Conference on Artificial Intelligence (ECAI), Montpellier, France, 2012. Available at: http://ebooks.iospress.nl/volumearticle/7084

Topic 3: Improving the Robustness of SVM (29.05.2018.) (Paul Schmidl)

Xu, Huan, Constantine Caramanis, and Shie Mannor. "Robustness and regularization of support vector machines." Journal of Machine Learning Research 10.Jul (2009): 1485-1510.

Biggio, Battista, Blaine Nelson, and Pavel Laskov. "Support vector machines under adversarial label noise." Asian Conference on Machine Learning. 2011.

Topic 4: Feature selection and adversarial learning (29.05.2018.) (Gence Özer)

Globerson, Amir, and Sam Roweis. "Nightmare at test time: robust learning by feature deletion." Proceedings of the 23rd international conference on Machine learning. ACM, 2006.

Xiao, Huang, et al. "Is Feature Selection Secure against Training Data Poisoning?" International Conference on Machine Learning (ICML). 2015.

Topic 5: Neural networks under attack (05.06.2018.) (Marwin Sandner)

Szegedy, Christian, et al. "Intriguing properties of neural networks." arXiv preprint arXiv:1312.6199 (2013). Available at: http://arxiv.org/abs/1312.6199

Goodfellow, Ian J., Jonathon Shlens, and Christian Szegedy. "Explaining and Harnessing Adversarial Examples." arXiv preprint arXiv:1412.6572 (2014). Available at: http://arxiv.org/abs/1412.6572

Topic 6: Neural networks - adversarial examples (05.06.2018.) (Mykyta Denysov)

Moosavi-Dezfooli, Seyed-Mohsen, et al. "Universal adversarial perturbations." CVPR. 2017. Available at: http://arxiv.org/abs/1705.09554

Tanay, Thomas, and Lewis Griffin. "A Boundary Tilting Perspective on the Phenomenon of Adversarial Examples." arXiv preprint arXiv:1608.07690 (2016).

Topic 7: Neural Networks - sparsity and robustness (12.06.2018.) (Elias Marquart)

Papyan, Vardan, Yaniv Romano, and Michael Elad. "Convolutional neural networks analyzed via convolutional sparse coding." arXiv preprint (2016).

Cisse, Moustapha, et al. "Parseval networks: Improving robustness to adversarial examples." International Conference on Machine Learning. 2017.

Topic 8: Neural networks - general strategies for defense (12.06.2018.) (Kilian Batzner)

Bendale, Abhijit, and Terrance E. Boult. "Towards open set deep networks." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016.

Liu, Weiyang, et al. "Large-Margin Softmax Loss for Convolutional Neural Networks." ICML. 2016.

Topic 9: Ensembles against adversaries (19.06.2018.) (Dmytro Rybalko)

Biggio, Battista, Giorgio Fumera, and Fabio Roli. "Multiple classifier systems for robust classifier design in adversarial environments." International Journal of Machine Learning and Cybernetics 1.1-4 (2010): 27-41.

Tramèr, Florian, et al. "Ensemble adversarial training: Attacks and defenses." arXiv preprint arXiv:1705.07204 (2017).

Topic 10: Generative Adversarial Networks (19.06.2018.) (Lisa Buchner)

Goodfellow, Ian, et al. "Generative adversarial nets." Advances in neural information processing systems. 2014.

Denton, Emily L., Soumith Chintala, and Rob Fergus. "Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks." Advances in neural information processing systems. 2015.

Topic 11: Attacks on Reinforcement Learning (26.06.2018.) (Felix Opolka)

Lin, Yen-Chen, et al. "Tactics of adversarial attack on deep reinforcement learning agents." Proceedings of the 26th International Joint Conference on Artificial Intelligence. AAAI Press, 2017.

Huang, Sandy, et al. "Adversarial attacks on neural network policies." arXiv preprint arXiv:1702.02284 (2017).

Topic 12: Attacks on Natural Language Processing (26.06.2018.) (Matthias Humt)

Lowd, Daniel, and Christopher Meek. "Good Word Attacks on Statistical Spam Filters." Conference on Email and Anti-Spam (CEAS). 2005.

Mei, Shike, and Xiaojin Zhu. "The security of latent dirichlet allocation." Artificial Intelligence and Statistics. 2015.

Topic 13: Evading malware detection (10.07.2018.) (Martin Schonger)

Grosse, Kathrin, et al. "Adversarial examples for malware detection." European Symposium on Research in Computer Security. Springer, Cham, 2017.

Hu, Weiwei, and Ying Tan. "Generating Adversarial Malware Examples for Black-Box Attacks Based on GAN." arXiv preprint arXiv:1702.05983 (2017). Available at: http://arxiv.org/abs/1702.05983

Topic 14: Attacks on malware detection - injecting noise and poisoning (03.07.2018.) (cancelled)

Biggio, Battista, et al. "Poisoning behavioral malware clustering." Proceedings of the 2014 Workshop on Artificial Intelligence and Security. ACM, 2014.

Chen, Yizheng, et al. "Practical attacks against graph-based clustering." Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. ACM, 2017.

Topic 15: Attacks on PDF malware detection (10.07.2018.) (Michael Blind)

Xu, Weilin, Yanjun Qi, and David Evans. "Automatically evading classifiers." Proceedings of the 2016 Network and Distributed System Security Symposium (NDSS). 2016.

Smutz, Charles, and Angelos Stavrou. "When a Tree Falls: Using Diversity in Ensemble Classifiers to Identify Evasion in Malware Detectors." NDSS. 2016.