Publication Type

PhD Dissertation

Version

publishedVersion

Publication Date

4-2024

Abstract

Recommender systems are a crucial component of today's online services: they help users navigate an overwhelmingly large number of items and discover those that interest them. Unlike general recommender systems, which recommend items based on a user's overall preferences, sequential recommender systems consider the order of user-item interactions. Sequential recommendation aims to predict the next item a user will interact with, given the sequence of previously interacted items, while capturing both short-term and long-term dependencies among items.

In this thesis, we focus on sequential recommendation methods: from representation learning to large language model (LLM)-based reasoning. On the one hand, representation learning-based sequential recommendation methods typically feed the ID embeddings of interacted items into models, such as deep neural networks, to generate user representation vectors. They then rank candidate items by the similarity between the user representation vector and the candidate item vectors to produce a recommendation list. On the other hand, the LLM-based reasoning approach relies mainly on the LLM's strong reasoning ability and rich world knowledge. LLM-based reasoners require carefully designed prompts and/or demonstration examples that account for task complexity and prompt length constraints.
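To make the representation-learning pipeline concrete, the following is a minimal sketch, not the thesis's actual model: item IDs are embedded, a sequence encoder (a GRU here, purely as an illustrative choice) produces the user representation vector, and candidates are ranked by dot-product similarity. All names, such as `SeqEncoder`, are hypothetical.

```python
# Minimal sketch of a representation-learning sequential recommender.
# Illustrative assumptions: GRU encoder, dot-product scoring.
import torch
import torch.nn as nn

class SeqEncoder(nn.Module):
    def __init__(self, num_items: int, dim: int = 64):
        super().__init__()
        self.item_emb = nn.Embedding(num_items, dim, padding_idx=0)
        self.rnn = nn.GRU(dim, dim, batch_first=True)

    def forward(self, item_ids: torch.Tensor) -> torch.Tensor:
        # item_ids: (batch, seq_len) of previously interacted item IDs
        x = self.item_emb(item_ids)
        _, h = self.rnn(x)          # h: (1, batch, dim)
        return h.squeeze(0)         # user representation: (batch, dim)

def rank_candidates(model: SeqEncoder, history: torch.Tensor,
                    candidate_ids: torch.Tensor) -> torch.Tensor:
    # Score each candidate by the similarity between the user vector
    # and the candidate item vector, then sort descending.
    user_vec = model(history)                    # (batch, dim)
    cand_vecs = model.item_emb(candidate_ids)    # (num_cand, dim)
    scores = user_vec @ cand_vecs.T              # (batch, num_cand)
    return candidate_ids[scores.argsort(dim=-1, descending=True)]
```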

This thesis consists of three parts. In the first part, we aim to improve representation learning for sequential recommendation and present our efforts in building an explanation-guided contrastive learning sequential recommendation model. In particular, we first present the data sparsity issue in sequential recommendation and the false positive problem in contrastive learning. Next, we demonstrate how to utilize explanation methods for explanation-guided augmentation that enhances the positive and negative views in contrastive learning-based sequential recommendation, thereby improving the learned representations.
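The sketch below illustrates the general idea of explanation-guided augmentation under stated assumptions: per-item importance scores from some explanation method are available, and views are built by dropping the least important items (a semantics-preserving positive view) or the most important ones (a hard negative view). The drop ratio and selection rule are illustrative, not the thesis's exact procedure.

```python
# Sketch of explanation-guided augmentation for contrastive views.
# Assumption: `importance` holds per-item attribution scores produced
# by an explanation method; higher means more important to the model.
import numpy as np

def explanation_guided_views(seq, importance, drop_ratio=0.3):
    seq = np.asarray(seq)
    importance = np.asarray(importance, dtype=float)
    k = max(1, int(drop_ratio * len(seq)))
    order = np.argsort(importance)   # ascending: least important first

    # Positive view: drop the k LEAST important items, so the augmented
    # sequence keeps the original preference signal (fewer false positives).
    positive = seq[np.sort(order[k:])]

    # Hard negative view: drop the k MOST important items, destroying
    # the preference signal while keeping surface similarity.
    negative = seq[np.sort(order[:-k])]
    return positive, negative
```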

Most sequential recommendation methods primarily focus on improving the quality of user representations. However, representation learning-based methods still suffer from several issues: 1) data sparsity; 2) difficulty adapting to unseen tasks; 3) lack of world knowledge; and 4) lack of human-style reasoning for generating explanations. To address these issues, the second part of this thesis investigates how we can build sequential recommendation models based on large language models. In particular, we introduce two new research directions for LLM-based sequential recommendation: 1) zero-shot LLM-based reasoning over recommended items and 2) few-shot LLM-based reasoning over recommended items. For zero-shot LLM-based reasoning, we use an external module to generate candidate items, reducing the recommendation space, and a 3-step prompting method to capture user preferences and make ranked recommendations. For few-shot LLM-based reasoning, we study what makes in-context learning work for sequential recommendation and propose incorporating multiple demonstrations into one aggregated demonstration to avoid the long-input problem and improve recommendation accuracy. Both directions offer new and exciting research possibilities for using LLMs in recommender systems.
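As an illustration of the zero-shot direction, here is a minimal sketch of a multi-step prompting flow. An external module is assumed to have already produced the candidate set, `call_llm` is a placeholder for any chat-completion client, and the prompt wording is an assumption rather than the thesis's exact templates.

```python
# Sketch of zero-shot, 3-step prompting for next-item recommendation.
# `call_llm` is a stub; plug in any LLM client. Prompt texts are
# illustrative assumptions.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def three_step_recommend(history: list[str], candidates: list[str]) -> str:
    # Step 1: capture the user's preferences from the interaction history.
    prefs = call_llm(
        "The user has interacted with: " + ", ".join(history)
        + ". Summarize the user's preferences."
    )
    # Step 2: select representative items reflecting those preferences.
    reps = call_llm(
        f"Preferences: {prefs}\nFrom the history ({', '.join(history)}), "
        "select the items that best represent these preferences."
    )
    # Step 3: rank the externally generated candidates (a small set,
    # which keeps the recommendation space tractable for the LLM).
    return call_llm(
        f"Preferences: {prefs}\nRepresentative items: {reps}\n"
        f"Rank these candidate items for the user: {', '.join(candidates)}"
    )
```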

LLMs are generally capable of human-style reasoning, which can be used to generate explanations for a wide range of tasks. The final part of the thesis therefore addresses explanation generation and the evaluation of explanations for sequential recommendation results using LLMs. Specifically, we introduce an LLM-based explanation framework to support automatic evaluation of an LLM's ability to generate plausible post-hoc explanations from the content filtering and collaborative filtering perspectives. Using our benchmark data, the experimental results show that ChatGPT, with appropriate prompting, can be a promising explainer for recommendation tasks.
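To illustrate the two perspectives, the sketch below builds post-hoc explanation prompts for a recommended item; the templates are assumptions for illustration, not the benchmark's actual wording.

```python
# Sketch of prompt construction for post-hoc recommendation explanations
# from two perspectives. Templates are illustrative assumptions.
def build_explanation_prompt(history: list[str], item: str,
                             perspective: str) -> str:
    if perspective == "content":
        angle = ("explain how the attributes of the recommended item "
                 "match the items in the user's history")
    elif perspective == "collaborative":
        angle = ("explain the recommendation via users with similar "
                 "histories who also liked this item")
    else:
        raise ValueError(f"unknown perspective: {perspective}")
    return (
        f"A user interacted with: {', '.join(history)}.\n"
        f"The system recommended: {item}.\n"
        f"As a recommendation explainer, {angle}."
    )
```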

Keywords

Sequential Recommendation, Large Language Model, Contrastive Learning, Explanation

Degree Awarded

PhD in Computer Science

Discipline

Computer Sciences

Supervisor(s)

LIM, Ee Peng

First Page

1

Last Page

155

Publisher

Singapore Management University

City or Country

Singapore

Copyright Owner and License

Author
