Publication Type
Conference Proceeding Article
Version
acceptedVersion
Publication Date
5-2025
Abstract
Humans are accustomed to reading and writing in a forward manner, and this natural bias extends to text understanding in auto-regressive large language models (LLMs). This paper investigates whether LLMs, like humans, struggle with reverse modeling, specifically with reversed text inputs. We found that publicly available pre-trained LLMs cannot understand such inputs. However, LLMs trained from scratch on both forward and reverse texts understand them equally well during inference. Our case study shows that texts with different content incur different losses depending on the input direction: some achieve lower loss in the forward direction, others in the reverse. This leads to a simple yet effective data-selection method based on the loss difference between the forward and reverse directions. Using the selected data for continued pretraining boosts LLMs' performance by a large margin on the task of Massive Multitask Language Understanding.
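The data-selection idea in the abstract can be sketched in a few lines: score each document under a forward-direction model and a reverse-direction model, then rank by the gap between the two losses. The paper uses LLM losses; the sketch below substitutes a tiny add-one-smoothed character bigram model so it runs self-contained, and all function names (`bigram_loss`, `select_by_direction_gap`) and the absolute-gap ranking criterion are illustrative assumptions, not the authors' implementation.

```python
import math
from collections import Counter

def train_bigrams(corpus):
    """Collect bigram and context counts over a list of strings."""
    counts, contexts = Counter(), Counter()
    for text in corpus:
        counts.update(zip(text, text[1:]))
        contexts.update(text[:-1])
    return counts, contexts

def bigram_loss(text, counts, contexts, vocab_size):
    """Average negative log-likelihood under add-one-smoothed bigram counts."""
    nll = 0.0
    for a, b in zip(text, text[1:]):
        p = (counts[(a, b)] + 1) / (contexts[a] + vocab_size)
        nll -= math.log(p)
    return nll / max(len(text) - 1, 1)

def select_by_direction_gap(corpus, top_k):
    """Keep the top_k documents with the largest |forward loss - reverse loss|."""
    vocab_size = len({c for t in corpus for c in t})
    fwd = train_bigrams(corpus)                      # model over forward text
    rev = train_bigrams([t[::-1] for t in corpus])   # model over reversed text
    def gap(text):
        loss_fwd = bigram_loss(text, *fwd, vocab_size)
        loss_rev = bigram_loss(text[::-1], *rev, vocab_size)
        return abs(loss_fwd - loss_rev)
    return sorted(corpus, key=gap, reverse=True)[:top_k]

docs = ["the cat sat on the mat", "abcabcabcabc", "loss gap ranks documents"]
selected = select_by_direction_gap(docs, top_k=2)
print(selected)
```

In the paper's setting the two scorers would be a single LLM trained on both directions (or paired forward/reverse models), and the selection rule operates on the directional loss difference rather than on toy bigram statistics.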
Discipline
Artificial Intelligence and Robotics | Programming Languages and Compilers
Areas of Excellence
Digital transformation
Publication
Proceedings of the 2025 Annual Conference of the Nations of the Americas Chapter of the ACL, NAACL'25, Albuquerque, New Mexico, April 29 - May 4
First Page
1
Last Page
13
City or Country
Albuquerque, New Mexico
Citation
YU, Sicheng; XU, Yuanchen; DU, Cunxiao; ZHOU, Yanying; QIU, Minghui; SUN, Qianru; ZHANG, Hao; and WU, Jiawei.
Reverse modeling in large language models. (2025). Proceedings of the 2025 Annual Conference of the Nations of the Americas Chapter of the ACL, NAACL'25, Albuquerque, New Mexico, April 29 - May 4. 1-13.
Available at: https://ink.library.smu.edu.sg/sis_research/10146
Creative Commons License

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Included in
Artificial Intelligence and Robotics Commons, Programming Languages and Compilers Commons