Detection Transformers Under the Knife: A Neuroscience-Inspired Approach to Ablations


Nils Hütten (Bergische Universität Wuppertal), Florian Hölken (Bergische Universität Wuppertal), Hasan Tercan (Bergische Universität Wuppertal), Tobias Meisen (Bergische Universität Wuppertal)
The 36th British Machine Vision Conference

Abstract

In recent years, explainable artificial intelligence (XAI) has gained traction as an approach to enhancing model interpretability and transparency, particularly in complex models such as detection transformers. Despite rapid advancements, a substantial research gap remains in understanding the distinct roles of internal components, knowledge that is essential for improving transparency and efficiency. Inspired by neuroscientific ablation studies, which investigate the functions of brain regions through selective impairment, we systematically analyze the impact of ablating key components in three state-of-the-art detection transformer models: detection transformer (DETR), deformable detection transformer (DDETR), and detection transformer with denoising anchor boxes (DINO). The ablations target query embeddings, encoder and decoder multi-head self-attention (MHSA), as well as decoder multi-head cross-attention (MHCA) layers. We evaluate the consequences of these ablations on the performance metrics generalized intersection over union (gIoU) and F1-score, quantifying effects on both the classification and regression sub-tasks on the COCO dataset. To facilitate reproducibility and future research, we publicly release the DeepDissect library. Our findings reveal model-specific resilience patterns: while DETR is particularly sensitive to ablations in encoder MHSA and decoder MHCA, DDETR's multi-scale deformable attention enhances robustness, and DINO exhibits the greatest resilience due to its look-forward-twice update rule, which helps distribute knowledge across blocks. These insights also expose structural redundancies, particularly in DDETR's and DINO's decoder MHCA layers, highlighting opportunities for model simplification without sacrificing performance.
This study advances XAI for detection transformers by clarifying the contributions of internal components to model performance, offering insights to optimize and improve transparency and efficiency in critical applications.
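To illustrate the general idea behind such component ablations (this is a minimal sketch, not the DeepDissect API; the `ToyBlock` class and `attn` stand-in are hypothetical), one can zero out a sub-layer's contribution while leaving the residual connection intact, so the rest of the network still receives a valid signal:

```python
# Hypothetical illustration of a component ablation in a transformer-style
# block (NOT the DeepDissect implementation). Each block computes
# y = x + f(x); "ablating" f removes its contribution, and the residual
# path simply forwards x, mimicking a lesioned component.

class ToyBlock:
    def __init__(self, f, ablated=False):
        self.f = f            # sub-layer, e.g. a stand-in for MHSA or MHCA
        self.ablated = ablated

    def __call__(self, x):
        if self.ablated:
            # Sub-layer output treated as zero; residual passes x through.
            return x
        return [xi + fi for xi, fi in zip(x, self.f(x))]

# Toy stand-in for an attention sub-layer (scales its input by 0.5).
attn = lambda x: [0.5 * xi for xi in x]

block = ToyBlock(attn)
lesioned = ToyBlock(attn, ablated=True)

print(block([1.0, 2.0]))     # sub-layer active: [1.5, 3.0]
print(lesioned([1.0, 2.0]))  # sub-layer ablated: [1.0, 2.0]
```

Comparing detection metrics (e.g. gIoU, F1) between the intact and lesioned model then quantifies how much the ablated component contributed, in direct analogy to a neuroscientific lesion study.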

Citation

@inproceedings{Hütten_2025_BMVC,
author    = {Nils Hütten and Florian Hölken and Hasan Tercan and Tobias Meisen},
title     = {Detection Transformers Under the Knife: A Neuroscience-Inspired Approach to Ablations},
booktitle = {36th British Machine Vision Conference 2025, {BMVC} 2025, Sheffield, UK, November 24-27, 2025},
publisher = {BMVA},
year      = {2025},
url       = {https://bmva-archive.org.uk/bmvc/2025/assets/papers/Paper_291/paper.pdf}
}


Copyright © 2025 The British Machine Vision Association and Society for Pattern Recognition
The British Machine Vision Conference is organised by The British Machine Vision Association and Society for Pattern Recognition. The Association is a Company limited by guarantee, No.2543446, and a non-profit-making body, registered in England and Wales as Charity No.1002307 (Registered Office: Dept. of Computer Science, Durham University, South Road, Durham, DH1 3LE, UK).
