TY - JOUR
T1 - Combining Non-sampling and Self-attention for Sequential Recommendation
AU - Chen, Guangjin
AU - Zhao, Guoshuai
AU - Zhu, Li
AU - Zhuo, Zhimin
AU - Qian, Xueming
N1 - Publisher Copyright:
© 2022 Elsevier Ltd
PY - 2022/3
Y1 - 2022/3
N2 - With the rapid development of social media and big data technology, users' sequential behavior can be recorded and preserved on different media platforms, and it is crucial to model user preferences by mining these sequential behaviors. The goal of sequential recommendation is to predict what a user will interact with next based on the user's historical interaction sequence. However, existing sequential recommendation methods generally adopt a negative sampling mechanism (e.g., random uniform sampling) for pairwise learning, which leaves the model insufficiently trained and degrades its overall performance. Therefore, we propose a Non-sampling Self-attentive Sequential Recommendation (NSSR) model that combines a non-sampling mechanism with a self-attention mechanism. While keeping training efficient, NSSR takes all pairs in the training set as training samples, so that the model is fully trained. Specifically, we take the interaction sequence as the current user representation and propose a new loss function to implement the non-sampling training mechanism. Finally, NSSR achieves state-of-the-art results on three public datasets, MovieLens-1M, Amazon Beauty and Foursquare_TKY, improving recommendation performance by about 29.3%, 25.7% and 42.1%, respectively.
AB - With the rapid development of social media and big data technology, users' sequential behavior can be recorded and preserved on different media platforms, and it is crucial to model user preferences by mining these sequential behaviors. The goal of sequential recommendation is to predict what a user will interact with next based on the user's historical interaction sequence. However, existing sequential recommendation methods generally adopt a negative sampling mechanism (e.g., random uniform sampling) for pairwise learning, which leaves the model insufficiently trained and degrades its overall performance. Therefore, we propose a Non-sampling Self-attentive Sequential Recommendation (NSSR) model that combines a non-sampling mechanism with a self-attention mechanism. While keeping training efficient, NSSR takes all pairs in the training set as training samples, so that the model is fully trained. Specifically, we take the interaction sequence as the current user representation and propose a new loss function to implement the non-sampling training mechanism. Finally, NSSR achieves state-of-the-art results on three public datasets, MovieLens-1M, Amazon Beauty and Foursquare_TKY, improving recommendation performance by about 29.3%, 25.7% and 42.1%, respectively.
KW - Non-sampling mechanism
KW - Self-attention
KW - Sequential recommendation
KW - User preference modeling
UR - https://www.scopus.com/pages/publications/85122648131
U2 - 10.1016/j.ipm.2021.102814
DO - 10.1016/j.ipm.2021.102814
M3 - Article
AN - SCOPUS:85122648131
SN - 0306-4573
VL - 59
JO - Information Processing and Management
JF - Information Processing and Management
IS - 2
M1 - 102814
ER -