
A Stateful LLM Agent Approach to Adversarially Robust Object Detection

Supervisor(s): Wei Herng Choong, Chingyu Kao
Status: finished
Topic: Others
Author: Iheb Ghanmi
Submission: 2025-09-01
Type of Thesis: Master's thesis
Thesis topic in cooperation with the Fraunhofer Institute for Applied and Integrated Security (AISEC), Garching

Description

Deep neural networks excel at object detection but remain vulnerable to adversarial perturbations that can severely degrade performance. This thesis explores whether agentic large language models (LLMs), with their ability to reason, plan, and use tools, can enhance robustness in detection pipelines. We introduce AdverGuard-LLM, a stateful controller that couples an EfficientDet-D0 detector with modules for adversarial purification and consistency checking, coordinated through a bounded loop.
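The abstract only names the controller's components; the sketch below illustrates one plausible shape of such a bounded loop, assuming the controller alternates detection, purification, and a consistency check until agreement is reached or a step budget is exhausted. All identifiers (bounded_defense_loop, AgentState, the detect/purify/consistency callables, max_steps, threshold) are hypothetical placeholders rather than the thesis's actual interface, and a fixed threshold stands in for the LLM agent's decision step.

```python
# Hypothetical sketch of a bounded detect-purify-check loop; not the thesis's API.
from dataclasses import dataclass, field
from typing import Callable, List

import numpy as np


@dataclass
class AgentState:
    """Accumulated history so the controller can reason over past steps."""
    steps: int = 0
    history: List[dict] = field(default_factory=list)


def bounded_defense_loop(
    image: np.ndarray,
    detect: Callable[[np.ndarray], list],
    purify: Callable[[np.ndarray], np.ndarray],
    consistency: Callable[[list, list], float],
    max_steps: int = 3,
    threshold: float = 0.8,
) -> list:
    """Re-run detection on purified views while detections disagree,
    stopping after at most `max_steps` iterations (the bounded loop)."""
    state = AgentState()
    current = image
    detections = detect(current)

    while state.steps < max_steps:
        purified = purify(current)
        purified_detections = detect(purified)
        score = consistency(detections, purified_detections)
        state.history.append({"step": state.steps, "consistency": score})
        state.steps += 1

        if score >= threshold:
            # Detections on the original and purified views agree: accept.
            return detections

        # Detections disagree: keep the purified view and iterate.
        current = purified
        detections = purified_detections

    # Step budget exhausted: return the latest detections as a fallback.
    return detections
```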
Evaluation on COCO val2017 shows that the current design underperforms the vanilla detector on clean images. AP@[.50:.95] (henceforth clean AP) drops most sharply on medium and small objects, while large-object detection is less affected. These findings indicate that LLM-based controllers, though not competitive in raw accuracy, provide a transparent and extensible scaffold for integrating future detector-aware defenses.
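For context, the clean-AP numbers referenced above correspond to the standard COCO bounding-box protocol; a minimal sketch of that evaluation with pycocotools follows, with the annotation and detection file paths as placeholders.

```python
# Minimal sketch of the standard COCO bbox evaluation (AP@[.50:.95] plus the
# small/medium/large breakdown). File paths are placeholders, not the thesis's artifacts.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("annotations/instances_val2017.json")   # ground-truth annotations
coco_dt = coco_gt.loadRes("detections_val2017.json")   # detector outputs in COCO result format

evaluator = COCOeval(coco_gt, coco_dt, iouType="bbox")
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()  # prints AP@[.50:.95] and AP for small/medium/large objects
```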

Keywords: Adversarial robustness, object detection, large language models, agentic LLMs