LLMs-as-instructors: Learning from errors toward automating model improvement
Publication Type
Conference Proceeding Article
Publication Date
11-2024
Abstract
This paper introduces the "LLMs-as-Instructors" framework, which leverages advanced Large Language Models (LLMs) to autonomously enhance the training of smaller target models. Inspired by the theory of "Learning from Errors," the framework employs an instructor LLM to analyze the specific errors made by a target model, enabling targeted and efficient training cycles. Within this framework, we implement two strategies: "Learning from Error," which tailors training data using only incorrect responses, and "Learning from Error by Contrast," which applies contrastive learning to both correct and incorrect responses for a deeper understanding of errors. Our empirical studies, conducted with several open-source models, demonstrate significant improvements across multiple benchmarks, including mathematical reasoning, coding ability, and factual knowledge. Notably, the refined Llama-3-8B-Instruct outperforms ChatGPT, illustrating the effectiveness of our approach. By combining the strengths of both strategies, we attain a more balanced performance improvement on both in-domain and out-of-domain benchmarks.
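The abstract describes the framework only at a high level. The sketch below is a minimal, illustrative Python outline (not the authors' code) of how the two strategies could be wired together; the `instructor_llm` and `target_model` callables, the exact-match grading, and the prompt wording are all assumptions made for illustration.

```python
from typing import Callable, Dict, List, Tuple

# Hypothetical callables: each takes a prompt string and returns a string.
InstructorLLM = Callable[[str], str]
TargetModel = Callable[[str], str]
Sample = Dict[str, str]  # expects "question" and "answer" keys


def split_by_correctness(
    target: TargetModel, probe_set: List[Sample]
) -> Tuple[List[Sample], List[Sample]]:
    """Run the target model on a probe set and split its responses into
    correct and incorrect buckets (exact-match grading for illustration)."""
    correct: List[Sample] = []
    incorrect: List[Sample] = []
    for sample in probe_set:
        response = target(sample["question"])
        record = {**sample, "response": response}
        if response.strip() == sample["answer"].strip():
            correct.append(record)
        else:
            incorrect.append(record)
    return correct, incorrect


def learning_from_error(instructor: InstructorLLM, incorrect: List[Sample]) -> List[str]:
    """Strategy 1 ("Learning from Error"): the instructor analyzes only the
    incorrect responses and drafts new, targeted training samples."""
    new_samples = []
    for rec in incorrect:
        prompt = (
            "The student model answered the question below incorrectly.\n"
            f"Question: {rec['question']}\n"
            f"Student answer: {rec['response']}\n"
            f"Reference answer: {rec['answer']}\n"
            "Analyze the likely error and write one new training example "
            "(question plus step-by-step answer) that addresses it."
        )
        new_samples.append(instructor(prompt))
    return new_samples


def learning_from_error_by_contrast(
    instructor: InstructorLLM, correct: List[Sample], incorrect: List[Sample]
) -> List[str]:
    """Strategy 2 ("Learning from Error by Contrast"): the instructor contrasts
    a correct response with an incorrect one to localize the capability gap."""
    new_samples = []
    for good, bad in zip(correct, incorrect):
        prompt = (
            "Compare a correct and an incorrect response from the same student model.\n"
            f"Correctly solved: {good['question']} -> {good['response']}\n"
            f"Incorrectly solved: {bad['question']} -> {bad['response']} "
            f"(reference: {bad['answer']})\n"
            "Identify what the model misunderstands and write one new training "
            "example targeting that gap."
        )
        new_samples.append(instructor(prompt))
    return new_samples
```

In a full training cycle, the generated samples would be used to fine-tune the target model, after which the probe, analyze, and train steps repeat.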
Keywords
Large Language Models, LLM, Learning from errors, Model training
Discipline
Artificial Intelligence and Robotics
Research Areas
Data Science and Engineering; Intelligent Systems and Optimization
Publication
Empirical Methods in Natural Language Processing (EMNLP) Findings
Identifier
10.48550/arXiv.2407.00497
Publisher
Association for Computational Linguistics
City or Country
Miami, Florida, USA
Citation
YING, Jiahao; LIN, Mingbao; CAO, Yixin; TANG, Wei; WANG, Bo; SUN, Qianru; HUANG, Xuanjing; and YAN, Shuicheng.
LLMs-as-instructors: Learning from errors toward automating model improvement. (2024). Empirical Methods in Natural Language Processing (EMNLP) Findings.
Available at: https://ink.library.smu.edu.sg/sis_research/9440
Additional URL
https://doi.org/10.48550/arXiv.2407.00497