Mitigating Hallucinations in Multimodal LLMs via Object-aware Preference Optimization


Alberto Compagnoni (University of Modena and Reggio Emilia), Davide Caffagni (University of Modena and Reggio Emilia), Nicholas Moratelli (University of Modena and Reggio Emilia), Lorenzo Baraldi (University of Modena and Reggio Emilia), Marcella Cornia (University of Modena and Reggio Emilia), Rita Cucchiara (University of Modena and Reggio Emilia)
The 36th British Machine Vision Conference

Abstract

Multimodal Large Language Models (MLLMs) have emerged as a unified interface for addressing a multitude of tasks, ranging from NLP to computer vision. Despite showcasing state-of-the-art results on many benchmarks, a long-standing issue is the tendency of MLLMs to hallucinate, that is, to generate answers to the user's query that are not grounded in the visual input. In this paper, we cast hallucination as an alignment problem, seeking to steer the MLLM so that it prefers generating content without hallucinations. In contrast to recent approaches that require complicated pipelines to build synthetic preference data for alignment training, often relying on proprietary models, we capitalize on the well-known CHAIR metric, originally proposed to gauge the degree of hallucination in image captioning. Given a pair of generated answers, we leverage CHAIR to distinguish winner and loser options (i.e., non-hallucinated and hallucinated samples) and fine-tune off-the-shelf MLLMs via Direct Preference Optimization (DPO). The resulting method, which we refer to as CHAIR-DPO, effectively diminishes the amount of hallucinated answers on several hallucination benchmarks, demonstrating the effectiveness of fine-tuning the MLLM with a CHAIR-based reward. Source code and trained models are publicly available at https://github.com/aimagelab/CHAIR-DPO.
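To make the pipeline described above concrete, the following is a minimal sketch of how CHAIR-based preference pairs and the standard DPO objective fit together. The function names (`chair_score`, `make_preference_pair`, `dpo_loss`), the dictionary format for candidate answers, and the choice of beta are illustrative assumptions, not the paper's actual implementation; the CHAIR-instance formula and the DPO loss follow their published definitions.

```python
import math

def chair_score(caption_objects, gt_objects):
    """CHAIR-instance style score: fraction of objects mentioned in the
    answer that are absent from the ground-truth object set
    (0.0 = no hallucinated objects, 1.0 = all objects hallucinated)."""
    if not caption_objects:
        return 0.0
    hallucinated = [o for o in caption_objects if o not in gt_objects]
    return len(hallucinated) / len(caption_objects)

def make_preference_pair(cand_a, cand_b, gt_objects):
    """Rank two candidate answers by CHAIR: the lower score wins,
    yielding a (winner, loser) pair for DPO training."""
    score_a = chair_score(cand_a["objects"], gt_objects)
    score_b = chair_score(cand_b["objects"], gt_objects)
    return (cand_a, cand_b) if score_a <= score_b else (cand_b, cand_a)

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """DPO loss for one preference pair: -log sigmoid of the beta-scaled
    log-ratio margin between policy and frozen reference model."""
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Toy example (hypothetical data): "cat" is not in the image.
gt = {"dog", "frisbee", "grass"}
a = {"text": "A dog catches a frisbee.", "objects": ["dog", "frisbee"]}
b = {"text": "A dog and a cat chase a frisbee.",
     "objects": ["dog", "cat", "frisbee"]}
winner, loser = make_preference_pair(a, b, gt)
```

In practice the log-probabilities would come from the policy MLLM and a frozen reference copy, and the loss would be averaged over a batch of CHAIR-ranked pairs; this sketch only illustrates the per-pair computation.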

Citation

@inproceedings{Compagnoni_2025_BMVC,
author    = {Alberto Compagnoni and Davide Caffagni and Nicholas Moratelli and Lorenzo Baraldi and Marcella Cornia and Rita Cucchiara},
title     = {Mitigating Hallucinations in Multimodal LLMs via Object-aware Preference Optimization},
booktitle = {36th British Machine Vision Conference 2025, {BMVC} 2025, Sheffield, UK, November 24-27, 2025},
publisher = {BMVA},
year      = {2025},
url       = {https://bmva-archive.org.uk/bmvc/2025/assets/papers/Paper_666/paper.pdf}
}


Copyright © 2025 The British Machine Vision Association and Society for Pattern Recognition
The British Machine Vision Conference is organised by The British Machine Vision Association and Society for Pattern Recognition. The Association is a Company limited by guarantee, No.2543446, and a non-profit-making body, registered in England and Wales as Charity No.1002307 (Registered Office: Dept. of Computer Science, Durham University, South Road, Durham, DH1 3LE, UK).
