
Audio Adversarial Examples


Seminar, 2 SWS / 5 ECTS
Organizer: Fabian Franzen
Time and place:

Preliminary talk: July 10, 14:00 - 14:45 (Slides)

Kick-off: August 11 (Tuesday), 14:00-15:00
Presentations: November 26 (Thursday) and November 27 (Friday), 9:00-17:00
Debriefing: December 4 (Friday), 9:00-10:00 and January 15 (Friday), 9:00-10:00

Start: 2020-08-11

The lecture is given in English
The slides are available in English
The exam will be in English

Audio Adversarial Examples

Seminar, winter term 2020. Level: Bachelor and Master. Number of participants: 8.

Time: This course will be held as a block seminar. Preliminary talk: July 7, 15:00 - 15:45. Kick-off: August 11 (Tuesday), 14:00-15:00. Presentations: November 26 (Thursday) and November 27 (Friday), 9:00-17:00. Debriefing: December 4 (Friday), 9:00-10:00 and January 15 (Friday), 9:00-10:00.

Content: Adversarial examples are instances that “machine learning models misclassify [... and] that are only slightly different from correctly classified examples” (Goodfellow et al., 2014). Hence, these instances would easily be classified correctly by a human but cause incorrect classification by machine learning algorithms. In 2018, Carlini and Wagner published the first paper that deals with adversarial audio attacks in great detail (Carlini and Wagner, 2018). Their work has inspired the research community to further investigate attacks on speech recognition and voice recognition algorithms, both when the audio is fed directly into the system and when it is played back and recorded over-the-air.
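To make the idea concrete, the following minimal sketch shows an untargeted, single-step perturbation in the style of the fast gradient sign method of Goodfellow et al. (2014), applied directly to a waveform. It is illustrative only and not part of the seminar material: `model` is assumed to be a PyTorch classifier mapping a batch of waveforms to class logits, and `epsilon` is a freely chosen distortion bound.

    import torch

    def fgsm_audio(model, waveform, label, epsilon=1e-3):
        """Single-step, untargeted perturbation of an audio waveform (FGSM-style sketch)."""
        # Track gradients with respect to a detached copy of the input waveform.
        waveform = waveform.clone().detach().requires_grad_(True)
        loss = torch.nn.functional.cross_entropy(model(waveform), label)
        loss.backward()
        # Take one signed-gradient step that increases the loss, i.e. nudges the input
        # towards misclassification while changing each sample by at most epsilon.
        adversarial = waveform + epsilon * waveform.grad.sign()
        return adversarial.clamp(-1.0, 1.0).detach()

The audio attack of Carlini and Wagner (2018) instead optimizes the perturbation iteratively against an end-to-end speech-to-text model under a distortion constraint, which is considerably more involved than this single gradient step.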

With the spread of virtual assistants like Amazon Alexa, Apple HomePod, Cortana, Google Home or Siri, attacks on speech recognition models pose a threat to users' privacy and to the security of their connected devices. Similarly, voice recognition systems may be used for biometric access control; if they can be tricked, the access control no longer works reliably.

In this seminar, we take a look at different audio adversarial attacks and possible mitigations. Every participant is required to give a presentation.

Requirements: Basic knowledge of machine learning (especially deep neural networks) and IT security. The preliminary meeting can be accessed using the following link:

https://teams.microsoft.com/l/meetup-join/19%3ameeting_Y2Q0ZTJiY2QtMjk3Ny00M2ExLWJmYWQtZjQ4NmUzYTNmMzc2%40thread.v2/0?context=%7b%22Tid%22%3a%22f930300c-c97d-4019-be03-add650a171c4%22%2c%22Oid%22%3a%22aee05ad9-032b-4f8f-a34b-09cda7c4a4fc%22%7d

Goal: 1) familiarization with reading scientific papers and giving scientific presentations; 2) better understanding of attacks against machine learning algorithms; 3) active participation and insights into topics of current research. For more information, see the module descriptions IN0014 and IN2107.

Language: English

Method: The seminar is organized as follows. Every participant gives a presentation on a scientific paper, which is assigned in the kick-off session. Furthermore, every student is required to write a four-page handout summarizing the main points of the paper (a LaTeX template will be provided).

The grade is composed of:

10% active participation
25% presentation (structure of the talk, introduction to the topic, clear problem definition and motivation, sound style of delivery, ...)
25% handout (language, structure of the handout, ...)
40% quality of the content (main points of the paper, good discussion and outlook, ...)

We attach great importance to all students benefiting from each other's presentations.
A short introduction to the topic of adversarial examples will be provided in the preliminary talk and kick-off sessions.