TY - JOUR
T1 - High-efficient hierarchical federated learning on non-IID data with progressive collaboration
AU - Cai, Yunyun
AU - Xi, Wei
AU - Shen, Yuhao
AU - Peng, Youcheng
AU - Song, Shixuan
AU - Zhao, Jizhong
N1 - Publisher Copyright:
© 2022 Elsevier B.V.
PY - 2022/12
Y1 - 2022/12
N2 - Hierarchical federated learning (HFL) allows multiple edge aggregations at edge devices before one global aggregation to address both the non-independent and identically distributed (non-IID) data issue and the communication bottleneck in federated learning (FL). To solve the non-IID issue, most HFL algorithms assume that clients can be assigned to any edge device. In practice, however, such assumptions are often unrealistic. In this paper, we propose a highly efficient HFL algorithm, named FedPEC, which introduces progressive edge collaboration rather than unrealistic client allocation. FedPEC estimates the initial number of collaborators based on our proven convergence upper bound, and then continually adjusts the estimated number of collaborators according to the characteristics of each stage in the following rounds. Guided by the estimated number of collaborators, each edge device can be assigned an appropriate collaborator set based on an adaptive similarity threshold. Extensive experiments are conducted to investigate FedPEC in terms of accuracy, loss, and convergence speed on various datasets. Our experimental results demonstrate that FedPEC can significantly outperform state-of-the-art FL algorithms.
AB - Hierarchical federated learning (HFL) allows multiple edge aggregations at edge devices before one global aggregation to address both the non-independent and identically distributed (non-IID) data issue and the communication bottleneck in federated learning (FL). To solve the non-IID issue, most HFL algorithms assume that clients can be assigned to any edge device. In practice, however, such assumptions are often unrealistic. In this paper, we propose a highly efficient HFL algorithm, named FedPEC, which introduces progressive edge collaboration rather than unrealistic client allocation. FedPEC estimates the initial number of collaborators based on our proven convergence upper bound, and then continually adjusts the estimated number of collaborators according to the characteristics of each stage in the following rounds. Guided by the estimated number of collaborators, each edge device can be assigned an appropriate collaborator set based on an adaptive similarity threshold. Extensive experiments are conducted to investigate FedPEC in terms of accuracy, loss, and convergence speed on various datasets. Our experimental results demonstrate that FedPEC can significantly outperform state-of-the-art FL algorithms.
KW - Edge collaboration
KW - Federated learning
KW - Hierarchical architecture
KW - Model training efficiency
KW - Non-IID data
KW - Optimization
UR - https://www.scopus.com/pages/publications/85134892570
U2 - 10.1016/j.future.2022.07.010
DO - 10.1016/j.future.2022.07.010
M3 - Article
AN - SCOPUS:85134892570
SN - 0167-739X
VL - 137
SP - 111
EP - 128
JO - Future Generation Computer Systems
JF - Future Generation Computer Systems
ER -