
Defenses Against Adversarial Policies

Supervisor(s): Philip Sperl
Status: finished
Topic: Others
Author: Pavel Czempin
Submission: 2021-07-16
Type of Thesis: Guided Research
Thesis topic in co-operation with the Fraunhofer Institute for Applied and Integrated Security AISEC, Garching

Description

Adversarial policies have shown that reinforcement learning models are vulnerable to attacks in the action space. While there is some work on stronger attacks, defenses have so far not been evaluated in detail. Increasing diversity during training, by training against multiple opponents, could make policies less susceptible to these attacks. We investigate adversarial policies in low-dimensional environments. Our results show that some environments are especially susceptible to adversarial policies, while others only become susceptible after slight modifications. We demonstrate a way to increase robustness against adversarial policies: population-based reinforcement learning, which increases the diversity of policies and strategies encountered during training. Training a victim policy with population-based reinforcement learning increases its robustness against adversarial policies.
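
To illustrate the core mechanism, the following minimal sketch (in Python) shows training against opponents sampled from a population: in every episode the learner faces a freshly sampled opponent instead of a single fixed adversary. All names here (MatchingPennies, BiasedPolicy, QLearner) are illustrative stand-ins, not the actual implementation from the thesis.

    import random

    class MatchingPennies:
        """One-step, two-player game: the learner gets +1 for matching
        the opponent's action and -1 otherwise."""
        def play(self, a_learner, a_opponent):
            return 1.0 if a_learner == a_opponent else -1.0

    class BiasedPolicy:
        """Fixed opponent that plays action 0 with a given bias;
        different biases give the population its diversity."""
        def __init__(self, bias):
            self.bias = bias
        def act(self):
            return 0 if random.random() < self.bias else 1

    class QLearner:
        """Tabular learner over two actions with epsilon-greedy exploration."""
        def __init__(self, lr=0.1, eps=0.1):
            self.q = [0.0, 0.0]
            self.lr = lr
            self.eps = eps
        def act(self):
            if random.random() < self.eps:
                return random.randrange(2)
            return 0 if self.q[0] >= self.q[1] else 1
        def update(self, action, reward):
            self.q[action] += self.lr * (reward - self.q[action])

    env = MatchingPennies()
    learner = QLearner()
    # A population of diverse opponents instead of one fixed adversary.
    population = [BiasedPolicy(b) for b in (0.6, 0.7, 0.8)]

    for episode in range(10000):
        opponent = random.choice(population)  # new opponent every episode
        a_l, a_o = learner.act(), opponent.act()
        learner.update(a_l, env.play(a_l, a_o))

    print("learned action values:", learner.q)

Sampling a fresh opponent each episode is what injects the diversity: the learner must perform well against the whole population mixture rather than exploit the quirks of one opponent, which is the intuition behind the robustness gain described above.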