Publication Type
Conference Proceeding Article
Version
acceptedVersion
Publication Date
12-2024
Abstract
The recent development of chain-of-thought (CoT) decoding has enabled large language models (LLMs) to generate explicit logical reasoning paths for complex problem-solving. However, research indicates that these paths are not always deliberate and optimal. The tree-of-thought (ToT) method employs tree-searching to extensively explore the reasoning space and find better reasoning paths that CoT decoding might overlook. This deliberation, however, comes at the cost of significantly increased inference complexity. In this work, we demonstrate that fine-tuning LLMs by leveraging the search tree constructed by ToT allows CoT to achieve similar or better performance, thereby avoiding the substantial inference burden. This is achieved through Chain of Preference Optimization (CPO), where LLMs are fine-tuned to align each step of the CoT reasoning paths with those of ToT using the inherent preference information in the tree-search process. Extensive experimental results show that CPO significantly improves LLM performance in solving a variety of complex problems, including question answering, fact verification, and arithmetic reasoning, demonstrating its effectiveness. Our code is available at this https URL.
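The abstract's core idea is mining step-level preference pairs from the ToT search tree. The sketch below is an illustrative assumption of how such pairs could be collected (it is not the authors' released code): at each expanded node, the highest-scored child thought is treated as preferred over each lower-scored sibling. The `Node` structure, `score` field, and `collect_preference_pairs` helper are hypothetical names introduced here for illustration.

```python
# Hedged sketch: deriving CPO-style step-level preference pairs from a
# hypothetical tree-of-thought search tree. All names are illustrative.
from dataclasses import dataclass, field


@dataclass
class Node:
    thought: str                       # the reasoning step at this node
    score: float                       # evaluator score assigned during search
    children: list["Node"] = field(default_factory=list)


def collect_preference_pairs(root: Node) -> list[tuple[str, str]]:
    """At every expanded node with multiple children, pair the highest-scored
    child (preferred) against each lower-scored sibling (dispreferred)."""
    pairs: list[tuple[str, str]] = []
    stack = [root]
    while stack:
        node = stack.pop()
        if len(node.children) > 1:
            best = max(node.children, key=lambda c: c.score)
            for sibling in node.children:
                if sibling is not best:
                    pairs.append((best.thought, sibling.thought))
        stack.extend(node.children)
    return pairs
```

These pairs would then feed a preference-optimization loss (e.g., a DPO-style objective) so that plain CoT decoding favors the steps the tree search preferred, without paying the tree-search cost at inference time.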
Discipline
Databases and Information Systems
Research Areas
Data Science and Engineering
Areas of Excellence
Digital transformation
Publication
Proceedings of the 38th Conference on Neural Information Processing Systems (NeurIPS 2024), Vancouver, Canada, December 10-15
First Page
1
Last Page
18
City or Country
USA
Citation
ZHANG, Xuan; DU, Chao; PANG, Tianyu; LIU, Qian; GAO, Wei; and LIN, Min.
Chain of preference optimization: Improving chain-of-thought reasoning in LLMs. (2024). Proceedings of the 38th Conference on Neural Information Processing Systems (NeurIPS 2024), Vancouver, Canada, December 10-15. 1-18.
Available at: https://ink.library.smu.edu.sg/sis_research/9881
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Additional URL
https://openreview.net/pdf?id=2cczgOfMP4
Comments
The paper was presented at the conference, but the official proceedings are not yet available online.