TY - JOUR
T1 - Multi-Stage Asynchronous Federated Learning with Adaptive Differential Privacy
AU - Li, Yanan
AU - Yang, Shusen
AU - Ren, Xuebin
AU - Shi, Liang
AU - Zhao, Cong
N1 - Publisher Copyright:
© 1979-2012 IEEE.
PY - 2024/2/1
Y1 - 2024/2/1
N2 - The fusion of federated learning (FL) and differential privacy (DP) can provide more comprehensive and rigorous privacy protection, thus attracting extensive interest from both academia and industry. However, facing the system-level challenge of device heterogeneity, most current synchronous FL paradigms exhibit low efficiency due to the straggler effect, which can be significantly reduced by asynchronous FL (AFL). Nevertheless, AFL has never been comprehensively studied, which poses a major challenge for the utility optimization of DP-enhanced AFL. Here, theoretically motivated multi-stage adaptive private algorithms are proposed to improve the trade-off between model utility and privacy for DP-enhanced AFL. In particular, we first build two DP-enhanced AFL frameworks that account for universal factors under different adversary models. Then, we give a solid analysis of the model convergence of AFL, based on which DP can be adaptively achieved with high utility. Through extensive experiments on different training models and benchmark datasets, we demonstrate that the proposed algorithms achieve the best overall performance, improve test accuracy by up to 24% at the same privacy loss, and converge faster than state-of-the-art algorithms. Our frameworks provide an analytical approach for private AFL and adapt to more complex FL application scenarios.
AB - The fusion of federated learning (FL) and differential privacy (DP) can provide more comprehensive and rigorous privacy protection, thus attracting extensive interest from both academia and industry. However, facing the system-level challenge of device heterogeneity, most current synchronous FL paradigms exhibit low efficiency due to the straggler effect, which can be significantly reduced by asynchronous FL (AFL). Nevertheless, AFL has never been comprehensively studied, which poses a major challenge for the utility optimization of DP-enhanced AFL. Here, theoretically motivated multi-stage adaptive private algorithms are proposed to improve the trade-off between model utility and privacy for DP-enhanced AFL. In particular, we first build two DP-enhanced AFL frameworks that account for universal factors under different adversary models. Then, we give a solid analysis of the model convergence of AFL, based on which DP can be adaptively achieved with high utility. Through extensive experiments on different training models and benchmark datasets, we demonstrate that the proposed algorithms achieve the best overall performance, improve test accuracy by up to 24% at the same privacy loss, and converge faster than state-of-the-art algorithms. Our frameworks provide an analytical approach for private AFL and adapt to more complex FL application scenarios.
KW - Asynchronous learning
KW - convergence
KW - differential privacy
KW - federated learning
UR - https://www.scopus.com/pages/publications/85177086387
U2 - 10.1109/TPAMI.2023.3332428
DO - 10.1109/TPAMI.2023.3332428
M3 - Article
C2 - 37956007
AN - SCOPUS:85177086387
SN - 0162-8828
VL - 46
SP - 1243
EP - 1256
JO - IEEE Transactions on Pattern Analysis and Machine Intelligence
JF - IEEE Transactions on Pattern Analysis and Machine Intelligence
IS - 2
ER -