TY - JOUR
T1 - LSAB: User Behavioral Pattern Modeling in Sequential Recommendation by Learning Self-Attention Bias
T2 - ACM Transactions on Knowledge Discovery from Data
AU - Han, Di
AU - Huang, Yifan
AU - Liu, Junmin
AU - Liao, Kai
AU - Lin, Kunling
N1 - Publisher Copyright:
© 2024 Copyright held by the owner/author(s). Publication rights licensed to ACM.
PY - 2024/1/13
Y1 - 2024/1/13
N2 - Because the weights of a self-attention model are not affected by the interval between items in a sequence, the model can describe user interests more accurately and completely, which is why self-attention is widely used in sequential recommendation. However, mainstream self-attention models focus on the similarity between items when computing the attention weights of user behavioral patterns and fail to promptly reflect sudden drifts in user decisions. In this article, we introduce a bias strategy into the self-attention module, referred to as Learning Self-Attention Bias (LSAB), to learn fast-changing user behavioral patterns more accurately. LSAB adjusts the bias arising from self-attention weights, improving prediction performance in sequential recommendation. In addition, this article designs four attention-weight bias types catering to diverse user behavior preferences. In tests on benchmark datasets, each bias strategy in LSAB benefits state-of-the-art models and improves their performance by nearly 5% on average. The source code is publicly available at https://gitee.com/kyle-liao/lsab.
KW - Sequential recommendation
KW - Behavioral pattern
KW - Bias
KW - Self-attention model
UR - https://www.scopus.com/pages/publications/85182589871
U2 - 10.1145/3632625
DO - 10.1145/3632625
M3 - Article
AN - SCOPUS:85182589871
SN - 1556-4681
VL - 18
JO - ACM Transactions on Knowledge Discovery from Data
JF - ACM Transactions on Knowledge Discovery from Data
IS - 3
M1 - 62
ER -