Publication Type
Conference Proceeding Article
Version
acceptedVersion
Publication Date
1-2026
Abstract
Recent advances in reasoning-centric models promise improved robustness through mechanisms such as chain-of-thought prompting and test-time scaling. However, their ability to withstand gaslighting negation attacks, i.e., adversarial prompts that confidently deny correct answers, remains underexplored. In this paper, we conduct a systematic evaluation of three state-of-the-art reasoning models, i.e., OpenAI’s o4-mini, Claude-3.7-Sonnet, and Gemini-2.5-Flash, across three multimodal benchmarks: MMMU, MathVista, and CharXiv. Our evaluation reveals significant accuracy drops (25–29% on average) following gaslighting negation attacks, indicating that even top-tier reasoning models struggle to preserve correct answers under manipulative user feedback. Building on the insights of this evaluation, and to further probe the vulnerability, we introduce GaslightingBench-R, a new diagnostic benchmark specifically designed to evaluate how well reasoning models defend their beliefs under gaslighting negation attacks. Constructed by filtering and curating 1,025 challenging samples from the existing benchmarks, GaslightingBench-R induces even more dramatic failures, with accuracy drops exceeding 53% on average. Our findings highlight a fundamental gap between step-by-step reasoning and resistance to adversarial manipulation, calling for new robustness strategies that safeguard reasoning models against gaslighting negation attacks. Additional details are available on our project page: https://binzhubz.github.io/GaslightingBench-R/.
Keywords
Gaslighting negation attacks, Multimodal reasoning, Reasoning models
Discipline
Artificial Intelligence and Robotics | Information Security | Software Engineering
Research Areas
Data Science and Engineering
Publication
Multimedia Modeling: 32nd International Conference on Multimedia Modeling, MMM 2026, Prague, Czech Republic, January 29-31, 2026, Proceedings
First Page
188
Last Page
202
ISBN
9789819569595
Identifier
10.1007/978-981-95-6960-1_14
Publisher
Springer
City or Country
Cham
Citation
ZHU, Bin; YIN, Hailong; CHEN, Jingjing; and JIANG, Yu-Gang.
Benchmarking gaslighting negation attacks against reasoning models. (2026). Multimedia Modeling: 32nd International Conference on Multimedia Modeling, MMM 2026, Prague, Czech Republic, January 29-31, 2026, Proceedings. 188-202.
Available at: https://ink.library.smu.edu.sg/sis_research/11025
Creative Commons License

This work is licensed under a Creative Commons Attribution-NonCommercial-No Derivative Works 4.0 International License.
Additional URL
https://doi.org/10.1007/978-981-95-6960-1_14
Included in
Artificial Intelligence and Robotics Commons, Information Security Commons, Software Engineering Commons