Publication Type

Conference Proceeding Article

Version

publishedVersion

Publication Date

8-2020

Abstract

Sequential user behavior modeling plays a crucial role in online user-oriented services, such as product purchasing, news feed consumption, and online advertising. The performance of sequential modeling heavily depends on the scale and quality of historical behaviors. However, the number of user behaviors inherently follows a long-tailed distribution, which has been seldom explored. In this work, we argue that focusing on tail users could bring more benefits, and we address the long-tail issue by learning transferrable parameters from both the optimization and the feature perspectives. Specifically, we propose a gradient alignment optimizer and adopt an adversarial training scheme to facilitate knowledge transfer from the head to the tail. These methods can also deal with the cold-start problem of new users. Moreover, the framework can be directly applied to various well-established sequential models. Extensive experiments on four real-world datasets verify the superiority of our framework over state-of-the-art baselines.
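
To make the gradient-alignment idea in the abstract concrete, below is a minimal, hypothetical sketch: the gradient computed on a tail-user batch is kept only to the extent that it does not conflict with the gradient from a head-user batch (a PCGrad-style projection). This is an illustration of the general idea, not the paper's actual optimizer; the model, batch shapes, helper names, and learning rate are all assumptions made for the example.

```python
import torch

def aligned_gradient(head_grad: torch.Tensor, tail_grad: torch.Tensor) -> torch.Tensor:
    """If the tail gradient conflicts with the head gradient (negative dot product),
    remove the conflicting component; otherwise return it unchanged."""
    dot = torch.dot(head_grad, tail_grad)
    if dot < 0:
        tail_grad = tail_grad - dot / (head_grad.norm() ** 2 + 1e-12) * head_grad
    return tail_grad

# Toy setup: a single linear model, one head-user batch and one (smaller) tail-user batch.
model = torch.nn.Linear(8, 1)
loss_fn = torch.nn.BCEWithLogitsLoss()

head_x, head_y = torch.randn(32, 8), torch.randint(0, 2, (32, 1)).float()
tail_x, tail_y = torch.randn(4, 8), torch.randint(0, 2, (4, 1)).float()

def flat_grad(loss):
    # Flatten per-parameter gradients into a single vector for the projection.
    grads = torch.autograd.grad(loss, model.parameters())
    return torch.cat([g.reshape(-1) for g in grads])

g_head = flat_grad(loss_fn(model(head_x), head_y))
g_tail = flat_grad(loss_fn(model(tail_x), tail_y))
g = g_head + aligned_gradient(g_head, g_tail)

# Apply the combined gradient manually with an illustrative learning rate.
with torch.no_grad():
    offset = 0
    for p in model.parameters():
        n = p.numel()
        p -= 1e-2 * g[offset:offset + n].view_as(p)
        offset += n
```

In this sketch the head batch drives the update while the tail batch contributes only its non-conflicting component, which is one simple way to bias optimization toward transferring head knowledge to tail users.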

Keywords

Sequential User Behavior Modeling, Long-tailed Distribution, Gradient Alignment, Adversarial Training

Discipline

Databases and Information Systems | Data Science | Data Storage Systems

Research Areas

Data Science and Engineering

Publication

KDD '20: Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, San Diego, CA, August 22-27, 2020

First Page

359

Last Page

367

ISBN

9781450379984

Identifier

10.1145/3394486.3403078

Publisher

ACM

City or Country

New York

Embargo Period

4-4-2021

Copyright Owner and License

Publisher

Additional URL

https://doi.org/10.1145/3394486.3403078
