TY - JOUR
T1 - Deep Class-Incremental Learning From Decentralized Data
AU - Zhang, Xiaohan
AU - Dong, Songlin
AU - Chen, Jinjie
AU - Tian, Qi
AU - Gong, Yihong
AU - Hong, Xiaopeng
N1 - Publisher Copyright:
© 2012 IEEE.
PY - 2024/5/1
Y1 - 2024/5/1
N2 - In this article, we focus on a new and challenging decentralized machine learning paradigm in which there are continuous inflows of data to be addressed and the data are stored in multiple repositories. We initiate the study of data-decentralized class-incremental learning (DCIL) by making the following contributions. First, we formulate the DCIL problem and develop the experimental protocol. Second, we introduce a paradigm to create a basic decentralized counterpart of typical (centralized) CIL approaches, and as a result, establish a benchmark for the DCIL study. Third, we further propose a decentralized composite knowledge incremental distillation (DCID) framework to transfer knowledge from historical models and multiple local sites to the general model continually. DCID consists of three main components, namely, local CIL, collaborated knowledge distillation (KD) among local models, and aggregated KD from local models to the general one. We comprehensively investigate our DCID framework by using different implementations of the three components. Extensive experimental results demonstrate the effectiveness of our DCID framework. The source code of the baseline methods and the proposed DCIL is available at https://github.com/Vision-Intelligence-and-Robots-Group/DCIL.
AB - In this article, we focus on a new and challenging decentralized machine learning paradigm in which there are continuous inflows of data to be addressed and the data are stored in multiple repositories. We initiate the study of data-decentralized class-incremental learning (DCIL) by making the following contributions. First, we formulate the DCIL problem and develop the experimental protocol. Second, we introduce a paradigm to create a basic decentralized counterpart of typical (centralized) CIL approaches, and as a result, establish a benchmark for the DCIL study. Third, we further propose a decentralized composite knowledge incremental distillation (DCID) framework to transfer knowledge from historical models and multiple local sites to the general model continually. DCID consists of three main components, namely, local CIL, collaborated knowledge distillation (KD) among local models, and aggregated KD from local models to the general one. We comprehensively investigate our DCID framework by using different implementations of the three components. Extensive experimental results demonstrate the effectiveness of our DCID framework. The source code of the baseline methods and the proposed DCIL is available at https://github.com/Vision-Intelligence-and-Robots-Group/DCIL.
KW - Catastrophic forgetting
KW - continuous learning
KW - incremental learning (IL)
KW - knowledge distillation (KD)
UR - https://www.scopus.com/pages/publications/85141605929
U2 - 10.1109/TNNLS.2022.3214573
DO - 10.1109/TNNLS.2022.3214573
M3 - Article
C2 - 36315536
AN - SCOPUS:85141605929
SN - 2162-237X
VL - 35
SP - 7190
EP - 7203
JO - IEEE Transactions on Neural Networks and Learning Systems
JF - IEEE Transactions on Neural Networks and Learning Systems
IS - 5
ER -