
Adversarial and Secure Machine Learning

Seminar, 2 SWS / 5 ECTS (course description)
Organizers: Bojan Kolosnjaji and Ching-Yu Kao
Time and location:

Thu 25.04., Tue 14.05., Thu 16.05., Tue 21.05., Thu 23.05., Tue 28.05., and Fri 31.05.

16:00 - 18:00 / seminar room 01.08.033

Start: 2019-04-23

The lecture is given in English
The slides are available in English
The exam will be in English

News:

  • The kick-off meeting is finished. If you couldn't attend it, don't worry: here are the slides.

Preliminary meeting

Preliminary meeting: Tuesday, January 29, 2019 at 16:30 in room 01.08.033.

 

Researchers and engineers in information security have successfully deployed
systems based on machine learning and data mining techniques to detect
suspicious activities, filter spam, recognize threats, etc. These systems
typically contain a classifier that flags certain instances as malicious based
on a set of features.

Unfortunately, there is evidence that adversaries have investigated several
approaches to deceive a classifier by disguising a malicious instance as
innocent. For example, spammers may add unrelated words or sentences to a junk
mail to avoid detection by a spam filter. Furthermore, some adversaries may be
able to craft training data that misleads the learning algorithm.
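
To make the spam example concrete, here is a minimal sketch of such a "good
word" evasion attack against a toy linear spam filter. The word weights, the
threshold, and the messages are all invented for illustration and do not come
from any real system.

    # A toy bag-of-words spam filter with hypothetical, hand-picked weights.
    # Positive weights mark spam-like words, negative weights mark benign words.
    weights = {"viagra": 2.0, "free": 1.5, "winner": 1.2,
               "meeting": -1.0, "report": -0.8}
    threshold = 1.0  # messages scoring above this value are flagged as spam

    def spam_score(words):
        """Linear score: sum of the weights of all known words in the message."""
        return sum(weights.get(w, 0.0) for w in words)

    spam_mail = ["free", "viagra", "winner"]
    print(spam_score(spam_mail) > threshold)     # score ~ 4.7 -> flagged as spam

    # Evasion: the spammer appends unrelated benign words ("good words") to push
    # the score below the threshold without changing the malicious payload.
    evasive_mail = spam_mail + ["meeting", "report"] * 3
    print(spam_score(evasive_mail) > threshold)  # score ~ -0.7 -> slips past the filter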

The ongoing arms race between adversaries and classifiers forces us to
reconsider the vulnerabilities of learning algorithms, giving rise to a
research field known as adversarial learning. Its goal is to develop learning
algorithms that remain robust in adversarial environments.

In this seminar, several hot topics in this line of research will be discussed.
The intention is to provide students with an overview of state-of-the-art
attack and defense algorithms in machine learning and to encourage them to
continue exploring this field. Each student will be assigned two research
papers. After studying the papers, students are required to write a short
report and give a 45-minute presentation on their understanding of the papers.

 

Some of the possible topics in the seminar:

● Evasion of machine learning classification algorithms

● Feature selection in adversarial environments

● Attacks on Support Vector Machines (SVM)

● Connections between robustness and regularization in SVMs

● Analysis of adversarial examples for Neural Networks (see the gradient-based sketch after this list)

● Adversarial attacks on reinforcement learning, sequence labeling, structured prediction

● Generative Adversarial Networks, Adversarial Autoencoders

● Techniques for increasing robustness of Neural Networks

● Adversarial attacks on spam detection

● Poisoning malware clustering

● Evading malware detection systems

● Attacks on graph-based anomaly detection in DNS data
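
As a pointer to the "adversarial examples" topic above, the sketch below
performs a single FGSM-style evasion step. A hand-written logistic-regression
score stands in for a neural network so the gradient can be derived by hand;
the weights, the input, and the step size epsilon are made up for illustration.

    # FGSM-style evasion against a differentiable score (illustrative only).
    import numpy as np

    rng = np.random.default_rng(0)
    w = rng.normal(size=10)        # hypothetical learned weight vector
    b = 0.0

    def score(x):
        """P(malicious | x) under a logistic model: sigmoid(w.x + b)."""
        return 1.0 / (1.0 + np.exp(-(w @ x + b)))

    x = np.sign(w)                 # a toy input that the model clearly flags

    # The gradient of the score w.r.t. x is sigmoid'(w.x + b) * w, so its sign
    # is simply sign(w). The FGSM step subtracts epsilon * sign(w), lowering the
    # score while changing each feature by at most epsilon.
    epsilon = 1.5
    x_adv = x - epsilon * np.sign(w)

    print("original score:   ", score(x))      # well above 0.5 -> flagged
    print("adversarial score:", score(x_adv))  # below 0.5 -> no longer flagged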

 

This is a block seminar; most of the meetings take place in May 2019. To find out more, check out the slides from the kick-off meeting or look up the dates at the top of the page.