TY - GEN
T1 - MDP
T2 - 14th IEEE International Conference on Joint Cloud Computing, JCC 2023
AU - Xu, Wanghan
AU - Shi, Bin
AU - Zhang, Jiqiang
AU - Feng, Zhiyuan
AU - Pan, Tianze
AU - Dong, Bo
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - In recent years, graph neural networks (GNN) have developed rapidly in various fields, but the high computational consumption of their model training often discourages graph owners who want to train GNN models but lack computing power. Therefore, these data owners often cooperate with external calculators during the model training process, which raises serious privacy concerns. Protecting private information in graphs, however, is difficult due to the complex graph structure consisting of node features and edges. To solve this problem, we propose a new privacy-preserving GNN named MDP based on matrix decomposition and differential privacy (DP), which allows external calculators to train GNN models without knowing the original data. Specifically, we first introduce the concept of topological secret sharing (TSS), and design a novel matrix decomposition method named eigenvalue selection (ES) according to TSS, which can preserve the message passing ability of the adjacency matrix while hiding edge information. We evaluate the feasibility and performance of our model through extensive experiments, which demonstrate that the MDP model achieves accuracy comparable to the original model, with practically affordable overhead.
AB - In recent years, graph neural networks (GNN) have developed rapidly in various fields, but the high computational consumption of their model training often discourages graph owners who want to train GNN models but lack computing power. Therefore, these data owners often cooperate with external calculators during the model training process, which raises serious privacy concerns. Protecting private information in graphs, however, is difficult due to the complex graph structure consisting of node features and edges. To solve this problem, we propose a new privacy-preserving GNN named MDP based on matrix decomposition and differential privacy (DP), which allows external calculators to train GNN models without knowing the original data. Specifically, we first introduce the concept of topological secret sharing (TSS), and design a novel matrix decomposition method named eigenvalue selection (ES) according to TSS, which can preserve the message passing ability of the adjacency matrix while hiding edge information. We evaluate the feasibility and performance of our model through extensive experiments, which demonstrate that the MDP model achieves accuracy comparable to the original model, with practically affordable overhead.
KW - distributed machine learning
KW - matrix decomposition
KW - privacy-preserving
KW - topological secret sharing
UR - https://www.scopus.com/pages/publications/85172163693
U2 - 10.1109/JCC59055.2023.00011
DO - 10.1109/JCC59055.2023.00011
M3 - Conference contribution
AN - SCOPUS:85172163693
T3 - Proceedings - 2023 IEEE 14th International Conference on Joint Cloud Computing, JCC 2023
SP - 38
EP - 45
BT - Proceedings - 2023 IEEE 14th International Conference on Joint Cloud Computing, JCC 2023
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 17 July 2023 through 20 July 2023
ER -