Life Long Learning on Language Models and its Robustness

Supervisor(s): Ching-Yu Kao
Status: finished
Topic: Others
Author: Danial Raza
Submission: 2020-12-15
Type of Thesis: Master's thesis
Thesis topic in co-operation with the Fraunhofer Institute for Applied and Integrated Security AISEC, Garching

Description

Classical isolation learning methods suffer from Catastrophic Forgetting (CF) when trained on multiple datasets in a streaming fashion. Life Long Learning (LLL) is a machine learning paradigm that deals with learning multiple datasets sequentially over time without the risk of CF. It is a well-established notion that machine learning models trained on multiple datasets tend to be more robust. We empirically verified this notion for LLL methods in Natural Language Processing (NLP). Moreover, we incorporated known defense strategies against adversarial attacks into LLL and evaluated their impact on the robustness of the models. To the best of our knowledge, our work is the first to evaluate and compare the robustness of LLL methods and isolation learning methods in NLP.
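
The contrast between isolation learning and LLL can be made concrete with a small experiment. The sketch below is not code from the thesis: it trains a toy classifier on two synthetic tasks in sequence, first naively (exhibiting CF) and then with Elastic Weight Consolidation (EWC), one well-known LLL method. The tasks, model size, Fisher sample size, and penalty strength are all illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F


def make_task(shift):
    # Toy two-class task; each task shifts the input distribution so the
    # decision boundary the model must learn moves between tasks.
    x = torch.randn(512, 20) + shift
    y = (x.sum(dim=1) > shift * 20).long()
    return x, y


def train(model, x, y, penalty=None, epochs=200):
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.cross_entropy(model(x), y)
        if penalty is not None:
            loss = loss + penalty(model)
        loss.backward()
        opt.step()


def accuracy(model, x, y):
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()


def run(use_ewc):
    torch.manual_seed(0)
    model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
    xa, ya = make_task(0.0)
    xb, yb = make_task(2.0)

    train(model, xa, ya)  # learn task A first
    acc_before = accuracy(model, xa, ya)

    penalty = None
    if use_ewc:
        # Empirical diagonal Fisher estimate over a few task-A examples:
        # how sensitive the task-A loss is to each parameter.
        fisher = [torch.zeros_like(p) for p in model.parameters()]
        for i in range(64):
            model.zero_grad()
            F.cross_entropy(model(xa[i:i + 1]), ya[i:i + 1]).backward()
            for f, p in zip(fisher, model.parameters()):
                f += p.grad.detach() ** 2 / 64
        anchors = [p.detach().clone() for p in model.parameters()]

        def penalty(m):
            # Quadratic pull toward the task-A weights, scaled by parameter
            # importance; the strength 5000.0 is an illustrative choice.
            return 5000.0 * sum(
                (f * (p - a) ** 2).sum()
                for f, p, a in zip(fisher, m.parameters(), anchors))

    train(model, xb, yb, penalty=penalty)  # then learn task B
    print(f"EWC={use_ewc}: task A accuracy {acc_before:.2f} -> "
          f"{accuracy(model, xa, ya):.2f}")


run(use_ewc=False)  # isolation learning in sequence: task A degrades (CF)
run(use_ewc=True)   # EWC retains more of task A while learning task B

On this toy setup the naive sequential run typically shows task A accuracy falling toward chance after training on task B, while the EWC run retains noticeably more of it (exact numbers vary with the seed and hyperparameters). The thesis studies this effect, and its interaction with adversarial robustness, on real NLP datasets and models rather than on a synthetic example like this one.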