
From Explainability to Robustness in Machine Learning based Malware Detection

Supervisor(s): Fabian Franzen, Bojan Kolosnjaji
Status: finished
Topic: Machine Learning Methods
Author: Joao Balisa Neto
Submission: 2020-04-15
Type of Thesis: Master's Thesis

Description

Complex machine learning models such as neural networks can achieve strong predictive performance, but they offer little transparency: by design, their predictions are not explainable. Regardless of the application, it is important to be able to assess and understand a model's predictions.

In this study, local interpretable model-agnostic explanations (LIME) are used to understand which features MalConv focuses on. These features are then mapped to the corresponding sections of the portable executable, as sketched below.
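The following is a minimal sketch of such a LIME-style analysis, assuming a trained MalConv-like scoring function (the hypothetical malconv_predict), a fixed 256-byte chunking of the input, and the pefile and scikit-learn libraries; the perturbation scheme and segment size are illustrative assumptions, not necessarily those used in the thesis.

# Hedged sketch: LIME-style local explanation of a MalConv-like model on raw
# bytes, followed by a mapping of important byte regions to PE sections.
# `malconv_predict`, CHUNK and N_SAMPLES are assumptions for illustration.
import numpy as np
import pefile
from sklearn.linear_model import Ridge

CHUNK = 256          # assumed segment size for perturbation
N_SAMPLES = 500      # number of perturbed copies to query

def malconv_predict(byte_array: np.ndarray) -> float:
    """Hypothetical wrapper returning the malware score for raw bytes."""
    raise NotImplementedError

def explain_bytes(raw: bytes) -> np.ndarray:
    x = np.frombuffer(raw, dtype=np.uint8).copy()
    n_chunks = int(np.ceil(len(x) / CHUNK))

    # Randomly keep/drop byte chunks and record the model's response.
    masks = np.random.binomial(1, 0.8, size=(N_SAMPLES, n_chunks))
    preds = np.empty(N_SAMPLES)
    for i, mask in enumerate(masks):
        perturbed = x.copy()
        for j, keep in enumerate(mask):
            if not keep:
                perturbed[j * CHUNK:(j + 1) * CHUNK] = 0  # zero out chunk j
        preds[i] = malconv_predict(perturbed)

    # Weight samples by proximity to the original (all-chunks-kept) input,
    # then fit a local linear surrogate; its coefficients rank chunk importance.
    distances = (n_chunks - masks.sum(axis=1)) / n_chunks
    weights = np.exp(-(distances ** 2) / 0.25)
    surrogate = Ridge(alpha=1.0).fit(masks, preds, sample_weight=weights)
    return surrogate.coef_  # one importance score per 256-byte chunk

def map_chunks_to_sections(pe_path: str, chunk_scores: np.ndarray) -> dict:
    """Aggregate chunk importances per PE section (by raw file offset)."""
    pe = pefile.PE(pe_path)
    section_scores = {}
    for sec in pe.sections:
        start = sec.PointerToRawData // CHUNK
        end = (sec.PointerToRawData + sec.SizeOfRawData) // CHUNK
        name = sec.Name.rstrip(b"\x00").decode(errors="replace")
        section_scores[name] = float(chunk_scores[start:end + 1].sum())
    return section_scores

The surrogate's coefficients give a per-chunk importance score, which map_chunks_to_sections aggregates per PE section using the sections' raw file offsets.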

The focus of this study is to determine the importance of each feature and to use this information to improve MalConv's robustness by means of an L1-norm regularization technique.
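A minimal sketch of one such regularized training step is shown below, assuming a PyTorch implementation of MalConv; which parameters are penalized (here, the convolutional filters) and the value of l1_lambda are illustrative assumptions, not the thesis' exact formulation.

# Hedged sketch: adding an L1-norm penalty to the training loss of a
# MalConv-like binary classifier. Model, optimizer and data are assumed given.
import torch
import torch.nn as nn

def training_step(model: nn.Module,
                  batch_bytes: torch.Tensor,
                  labels: torch.Tensor,
                  optimizer: torch.optim.Optimizer,
                  l1_lambda: float = 1e-4) -> float:
    criterion = nn.BCEWithLogitsLoss()
    optimizer.zero_grad()
    logits = model(batch_bytes).squeeze(-1)
    loss = criterion(logits, labels.float())

    # The L1 penalty encourages sparse filter weights, so the classifier
    # relies on fewer byte patterns, which can make small adversarial
    # perturbations of the binary less effective.
    l1_penalty = sum(p.abs().sum() for name, p in model.named_parameters()
                     if "conv" in name and p.requires_grad)
    loss = loss + l1_lambda * l1_penalty

    loss.backward()
    optimizer.step()
    return loss.item()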