Publication Type

Conference Proceeding Article

Version

acceptedVersion

Publication Date

5-2024

Abstract

Recommender systems significantly impact user experience across diverse domains, yet existing frameworks often prioritize offline evaluation metrics, neglecting the crucial integration of A/B testing for forward-looking assessments. In response, this paper introduces a new framework that seamlessly incorporates A/B testing into the Cornac recommendation library. Leveraging Cornac's diverse collection of model implementations, our framework enables effortless setup of A/B testing experiments from offline-trained models. We introduce a carefully designed dashboard and a robust backend for efficient logging and analysis of user feedback, which not only streamlines the A/B testing process but also enhances the evaluation of recommendation models in an online environment. By demonstrating the simplicity of on-demand online model evaluation, our work advances recommender system evaluation methodologies, underscoring the significance of A/B testing and providing a practical framework for its implementation. The framework is open-sourced at https://github.com/PreferredAI/cornac-ab.
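
For context, a minimal sketch of the offline step this framework builds on, using Cornac's public API: candidate models are trained and compared offline, then persisted so they can feed an A/B experiment. The dataset, models, metrics, and save directory below are illustrative choices, not prescribed by the paper, and the cornac-ab serving and dashboard layer itself is not shown here.

# Sketch: offline training and comparison with Cornac, then saving models.
# All specific choices (MovieLens-100K, MF/BPR, Recall/NDCG, "saved_models")
# are illustrative assumptions; the cornac-ab online layer is not shown.
import cornac
from cornac.datasets import movielens
from cornac.eval_methods import RatioSplit
from cornac.metrics import NDCG, Recall
from cornac.models import BPR, MF

# Load a public dataset and hold out 20% of interactions for offline evaluation.
data = movielens.load_feedback(variant="100K")
eval_method = RatioSplit(data=data, test_size=0.2, seed=123)

models = [MF(k=10, seed=123), BPR(k=10, seed=123)]
metrics = [Recall(k=10), NDCG(k=10)]

# Standard Cornac offline experiment: trains each model and reports metrics.
cornac.Experiment(eval_method=eval_method, models=models, metrics=metrics).run()

# Persist the trained models; per the abstract, A/B experiments are set up
# from such offline-trained models (the exact cornac-ab mechanism may differ).
for model in models:
    model.save("saved_models")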

Keywords

Recommender systems, Collaborative filtering, Recommendation library, A/B testing, Open-source framework

Discipline

Artificial Intelligence and Robotics | Numerical Analysis and Scientific Computing

Research Areas

Data Science and Engineering

Publication

Proceedings of the ACM Web Conference 2024 (WWW 2024): Singapore, May 13-17

First Page

1027

Last Page

1030

Identifier

10.1145/3589335.3651241

Publisher

Association for Computing Machinery

City or Country

Singapore

Comments

PDF provided by faculty

Additional URL

https://doi.org/10.1145/3589335.3651241
