Publication Type
Conference Proceeding Article
Version
publishedVersion
Publication Date
6-2024
Abstract
Large language models (LLMs) have shown excellent performance on various NLP tasks. To use LLMs as strong sequential recommenders, we explore the in-context learning approach to sequential recommendation. We investigate the effects of instruction format, task consistency, demonstration selection, and number of demonstrations. Since increasing the number of demonstrations in ICL does not improve accuracy despite using a long prompt, we propose a novel method called LLMSRec-Syn that incorporates multiple demonstration users into one aggregated demonstration. Our experiments on three recommendation datasets show that LLMSRec-Syn outperforms state-of-the-art LLM-based sequential recommendation methods. In some cases, LLMSRec-Syn can perform on par with or even better than supervised learning methods. Our code is publicly available at https://github.com/demoleiwang/LLMSRec_Syn.
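For illustration, below is a minimal Python sketch of the aggregated-demonstration idea the abstract describes: several demonstration users are merged into a single in-context example rather than appended as one example per user. The helper names, prompt wording, and sample items here are assumptions made for this sketch, not the paper's actual templates; see the linked repository for those.

# Illustrative sketch of aggregated demonstrations for LLM-based
# sequential recommendation. Prompt wording and helper names are
# assumptions; the paper's real templates live in the linked repo.

def format_history(items):
    """Render a user's chronological interaction history as text."""
    return "; ".join(items)

def build_aggregated_prompt(demo_users, target_history, candidates):
    """Merge several demonstration users into ONE demonstration block,
    instead of one (history -> next item) in-context example per user."""
    merged_histories = "\n".join(
        f"- User {i + 1} watched: {format_history(u['history'])}"
        for i, u in enumerate(demo_users)
    )
    # One aggregated label line summarizing all demo users' next items.
    merged_labels = ", ".join(u["next_item"] for u in demo_users)
    return (
        "Example (aggregated from several similar users):\n"
        f"{merged_histories}\n"
        f"Their next watched movies were: {merged_labels}\n\n"
        "Now the real task:\n"
        f"- Target user watched: {format_history(target_history)}\n"
        f"- Candidates: {', '.join(candidates)}\n"
        "Rank the candidates by how likely the target user watches them next."
    )

if __name__ == "__main__":
    demos = [
        {"history": ["Heat", "Casino"], "next_item": "Goodfellas"},
        {"history": ["Se7en", "Zodiac"], "next_item": "Gone Girl"},
    ]
    print(build_aggregated_prompt(demos, ["Taxi Driver"], ["Goodfellas", "Up"]))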
Keywords
Large language models, LLMs, Sequential recommendation
Discipline
Artificial Intelligence and Robotics | Computer Sciences
Research Areas
Data Science and Engineering
Publication
Findings of the Association for Computational Linguistics: NAACL 2024, Mexico City, Mexico, June 16-21
First Page
876
Last Page
895
Identifier
10.18653/v1/2024.findings-naacl.56
Publisher
Association for Computational Linguistics
City or Country
Mexico City
Citation
WANG, Lei and LIM, Ee-Peng.
The whole is better than the sum : Using aggregated demonstrations in in-context learning for sequential recommendation. (2024). Findings of the Association for Computational Linguistics: NAACL 2024, Mexico City, Mexico, June 16-21. 876-895.
Available at: https://ink.library.smu.edu.sg/sis_research/9786
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Additional URL
https://doi.org/10.18653/v1/2024.findings-naacl.56