TY - GEN
T1 - Enhancing Dual-Target Cross-Domain Recommendation via Similar User Bridging
AU - Zhou, Qi
AU - Chen, Xi
AU - Fang, Chuyu
AU - Wang, Jianji
AU - Qin, Chuan
AU - Zhuang, Fuzhen
N1 - Publisher Copyright:
© 2025 Copyright held by the owner/author(s).
PY - 2025/11/10
Y1 - 2025/11/10
N2 - Dual-target cross-domain recommendation aims to mitigate data sparsity and enable mutual enhancement via bidirectional knowledge transfer. Most existing methods rely on overlapping users to build cross-domain connections. However, in many real-world scenarios, overlapping data is extremely limited, or even entirely absent, significantly diminishing the effectiveness of these methods. To address this challenge, we propose SUBCDR, a novel framework that leverages large language models (LLMs) to bridge similar users across domains, thereby enhancing dual-target cross-domain recommendation. Specifically, we introduce a Multi-Interests-Aware Prompt Learning mechanism that enables LLMs to generate comprehensive user profiles, disentangling domain-invariant interest points while capturing fine-grained preferences. Then, we construct intra-domain bipartite graphs from user-item interactions and an inter-domain heterogeneous graph that links similar users across domains. To enable effective knowledge transfer, we employ Graph Convolutional Networks (GCNs) for intra-domain relationship modeling and design an Inter-domain Hierarchical Attention Network (InterHAN) that transfers inter-domain knowledge through similar users, learning both shared and specific user representations. Extensive experiments on seven public datasets demonstrate that SUBCDR outperforms state-of-the-art cross-domain and single-domain recommendation methods. Our code is publicly available at https://github.com/97z/SUBCDR.git.
AB - Dual-target cross-domain recommendation aims to mitigate data sparsity and enable mutual enhancement via bidirectional knowledge transfer. Most existing methods rely on overlapping users to build cross-domain connections. However, in many real-world scenarios, overlapping data is extremely limited, or even entirely absent, significantly diminishing the effectiveness of these methods. To address this challenge, we propose SUBCDR, a novel framework that leverages large language models (LLMs) to bridge similar users across domains, thereby enhancing dual-target cross-domain recommendation. Specifically, we introduce a Multi-Interests-Aware Prompt Learning mechanism that enables LLMs to generate comprehensive user profiles, disentangling domain-invariant interest points while capturing fine-grained preferences. Then, we construct intra-domain bipartite graphs from user-item interactions and an inter-domain heterogeneous graph that links similar users across domains. To enable effective knowledge transfer, we employ Graph Convolutional Networks (GCNs) for intra-domain relationship modeling and design an Inter-domain Hierarchical Attention Network (InterHAN) that transfers inter-domain knowledge through similar users, learning both shared and specific user representations. Extensive experiments on seven public datasets demonstrate that SUBCDR outperforms state-of-the-art cross-domain and single-domain recommendation methods. Our code is publicly available at https://github.com/97z/SUBCDR.git.
KW - dual-target cross-domain recommendation
KW - large language model
KW - similar user bridging
UR - https://www.scopus.com/pages/publications/105023200667
U2 - 10.1145/3746252.3761356
DO - 10.1145/3746252.3761356
M3 - Conference contribution
AN - SCOPUS:105023200667
T3 - CIKM 2025 - Proceedings of the 34th ACM International Conference on Information and Knowledge Management
SP - 4487
EP - 4497
BT - CIKM 2025 - Proceedings of the 34th ACM International Conference on Information and Knowledge Management
PB - Association for Computing Machinery, Inc
T2 - 34th ACM International Conference on Information and Knowledge Management, CIKM 2025
Y2 - 10 November 2025 through 14 November 2025
ER -