
Adversarial and Secure Machine Learning

If you know your enemies and know yourself, you will not be imperiled in a hundred battles; if you do not know your enemies but do know yourself, you will win one and lose one; if you do not know your enemies nor yourself, you will be imperiled in every single battle.

– Sun Tzu, The Art of War

Researchers and engineers in information security have successfully deployed systems that use machine learning and data mining for detecting suspicious activities, filtering spam, recognizing threats, and so on. These systems typically contain a classifier that flags certain instances as malicious based on a set of features. Unfortunately, there is evidence that adversaries have investigated several approaches to deceive such classifiers by disguising malicious instances as innocent ones. For example, spammers can add unrelated words, sentences or even paragraphs to junk mail to evade detection by a spam filter. Furthermore, an adversary may design training data that misleads the learning algorithm itself. Beyond the security domain, learning algorithms can also be compromised if they carry intrinsic algorithmic flaws.
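
As an illustration, the following is a minimal sketch of a "good word" evasion attack on a linear spam classifier: the adversary injects features with negative weights (benign words) into a spam message until the decision score drops to or below the threshold. The weights, the toy message and the helper evade_linear are illustrative assumptions, not taken from any deployed filter.

```python
# A minimal sketch of a "good word" evasion attack on a linear spam
# classifier. All weights and features below are illustrative.
import numpy as np

def evade_linear(x, w, b, max_changes=10):
    """Greedily toggle on absent features with the most negative
    weights (e.g., add benign words to a spam email) until the
    linear score w.x + b drops to or below the threshold of 0."""
    x = x.copy()
    # Candidate features: currently absent and carrying negative weight,
    # ordered from most to least negative.
    candidates = [i for i in np.argsort(w) if w[i] < 0 and x[i] == 0]
    for i in candidates[:max_changes]:
        if x @ w + b <= 0:   # already classified as innocent
            break
        x[i] = 1             # inject the benign feature
    return x

# Toy example: 5 binary word features; spammy words have positive weight.
w = np.array([2.0, 1.5, -1.0, -2.0, -0.5])  # assumed learned weights
b = -0.5
spam = np.array([1, 1, 0, 0, 0])            # original spam message
print(spam @ w + b)                          # 3.0  -> flagged as spam
disguised = evade_linear(spam, w, b)
print(disguised @ w + b)                     # 0.0  -> evades detection
```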

The ongoing arms race between adversaries and classifiers therefore pressures machine learning researchers to reconsider the vulnerability of learning algorithms, forming the research field known as adversarial learning. We aim to study both adversary and defender from a theoretical viewpoint, leveraging knowledge from convex geometry, optimization theory and game theory. By mimicking iterative games between adversarial learners and defenders (e.g., classifiers), we gain insight into learning systems and can uncover their potential vulnerabilities. The ultimate goal is to develop learning algorithms that remain robust in adversarial environments, even when the adversary has perfect knowledge of the system.
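
The sketch below mimics such an iterative game under simplifying assumptions: a logistic-regression defender retrains after each round, while a gradient-based attacker shifts malicious points against the decision score within a fixed budget. The toy data, the budget eps and the helpers train/attack are illustrative assumptions, not our published attack algorithms.

```python
# A minimal sketch of an iterative attacker/defender game, assuming a
# logistic-regression defender and a budgeted gradient-based attacker.
import numpy as np

rng = np.random.default_rng(0)

def train(X, y, epochs=200, lr=0.1):
    """Defender: fit logistic regression by gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def attack(X, y, w, eps=0.5):
    """Attacker: shift malicious points (y=1) against the direction of
    the weight vector, within an L2 budget eps per round."""
    d = w / (np.linalg.norm(w) + 1e-12)
    X_adv = X.copy()
    X_adv[y == 1] -= eps * d
    return X_adv

# Toy data: two Gaussian blobs (innocent = 0, malicious = 1).
X = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(+1, 1, (100, 2))])
y = np.concatenate([np.zeros(100), np.ones(100)])

w = train(X, y)
for r in range(3):                # a few attacker/defender rounds
    X = attack(X, y, w)           # attacker moves malicious points
    w = train(X, y)               # defender retrains on attacked data
    acc = np.mean((X @ w > 0) == y)
    print(f"round {r}: accuracy {acc:.2f}")
```

Watching how accuracy evolves over rounds in such a loop is one simple way to probe where a learning system becomes vulnerable.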

Researcher: Huang Xiao


 

Topic of Interests

  • Vulnerability analysis of learning algorithms
  • Attack strategies against learning algorithms
  • Learning from (adversarial) noisy data

Selected Talks

Invited talk @ MunichACM: Causative Adversarial Learning

ECAI 2012: Adversarial Label Flips Attack on Support Vector Machine

PAKDD 2012: Evasion Attack on Multi-class Linear Classifier, May 2012

Seminar talk: Robust Machine Learning in the Adversarial Setting, April 2012

Seminar

SS2013 - SS2015: Seminar on Adversarial and Secure Machine Learning