Publication Type
Conference Proceeding Article
Version
acceptedVersion
Publication Date
12-2025
Abstract
Multilingual reasoning remains a significant challenge for large language models (LLMs), with performance disproportionately favoring high-resource languages. Drawing inspiration from cognitive neuroscience, which suggests that human reasoning functions largely independently of language processing, we hypothesize that LLMs similarly encode reasoning and language as separable components that can be disentangled to enhance multilingual reasoning. To evaluate this, we perform a causal intervention by ablating language-specific representations at inference time. Experiments on 10 open-weight LLMs spanning 11 typologically diverse languages show that this language-specific ablation consistently boosts multilingual reasoning performance. Layer-wise analyses further confirm that language and reasoning representations can be effectively disentangled throughout the model, yielding improved multilingual reasoning capabilities, while preserving top-layer language features remains essential for maintaining linguistic fidelity. Compared to post-training methods such as supervised fine-tuning or reinforcement learning, our training-free language-reasoning disentanglement achieves comparable or superior results with minimal computational overhead. These findings shed light on the internal mechanisms underlying multilingual reasoning in LLMs and suggest a lightweight and interpretable strategy for improving cross-lingual generalization.
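Illustration of the intervention described in the abstract (not part of the original record, and not the authors' released code): the paper's training-free approach ablates language-specific representations from the model's hidden states at inference time. The short Python sketch below shows one common way such an ablation can be implemented, estimating a language-specific direction as a difference of mean hidden states and projecting it out of the residual stream. The function names, the difference-of-means estimator, and the toy data are illustrative assumptions only.

    # Minimal sketch (assumptions, not the authors' method): ablate a
    # "language-specific" direction from hidden states by projecting it out.
    import torch

    def estimate_language_direction(h_target: torch.Tensor,
                                    h_english: torch.Tensor) -> torch.Tensor:
        """Difference-of-means estimate of a language-specific direction.

        h_target, h_english: [num_examples, hidden_dim] hidden states collected
        from prompts in the target language and in English, respectively.
        """
        direction = h_target.mean(dim=0) - h_english.mean(dim=0)
        return direction / direction.norm()

    def ablate_direction(hidden: torch.Tensor,
                         direction: torch.Tensor) -> torch.Tensor:
        """Remove the component of `hidden` along the unit-norm `direction`.

        hidden: [..., hidden_dim]; direction: [hidden_dim].
        """
        coeff = hidden @ direction                     # projection coefficients
        return hidden - coeff.unsqueeze(-1) * direction

    if __name__ == "__main__":
        torch.manual_seed(0)
        hidden_dim = 16
        h_de = torch.randn(8, hidden_dim) + 2.0        # toy "target-language" states
        h_en = torch.randn(8, hidden_dim)              # toy "English" states
        d = estimate_language_direction(h_de, h_en)
        h = torch.randn(4, hidden_dim)
        h_ablated = ablate_direction(h, d)
        # After ablation, the component along the language direction is ~zero.
        print((h_ablated @ d).abs().max())

In practice such a projection would be applied inside the model (e.g., via forward hooks on selected layers) during generation; the toy tensors above only demonstrate that the projection removes the estimated language component.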
Discipline
Artificial Intelligence and Robotics | Programming Languages and Compilers
Research Areas
Intelligent Systems and Optimization
Areas of Excellence
Digital transformation
Publication
Proceedings of the Thirty-Ninth Annual Conference on Neural Information Processing Systems (NeurIPS 2025), San Diego, CA, USA, November 30 - December 5, 2025
First Page
1
Last Page
35
City or Country
USA
Citation
ZHAO, Weixiang; GUO, Jiahe; DENG, Yang; WU, Tongtong; ZHANG, Wenxuan; HU, Yulin; SUI, Xingyu; ZHAO, Yanyan; CHE, Wanxiang; QIN, Bing; CHUA, Tat-Seng; and LIU, Ting.
When less language is more: Language-reasoning disentanglement makes LLMs better multilingual reasoners. (2025). Proceedings of the Thirty-Ninth Annual Conference on Neural Information Processing Systems (NeurIPS 2025), San Diego, CA, USA, November 30 - December 5, 2025. 1-35.
Available at: https://ink.library.smu.edu.sg/sis_research/10738
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Additional URL
https://openreview.net/pdf?id=fleQlZ2VTx
Included in
Artificial Intelligence and Robotics Commons, Programming Languages and Compilers Commons