TY - JOUR
T1 - CEAT
T2 - Continual Expansion and Absorption Transformer for Non-Exemplar Class-Incremental Learning
AU - Dong, Songlin
AU - Gao, Xinyuan
AU - He, Yuhang
AU - Zhou, Zhengdong
AU - Kot, Alex C.
AU - Gong, Yihong
N1 - Publisher Copyright:
© 1991-2012 IEEE.
PY - 2025
Y1 - 2025
N2 - In dynamic real-world scenarios, continuous learning without forgetting old knowledge is essential, particularly in environments with strict privacy protection or on resource-constrained edge devices where storing old exemplars is infeasible. Therefore, Non-Exemplar Class-Incremental Learning (NECIL) has garnered significant attention. Compared with standard settings, it faces a more severe plasticity-stability dilemma and classifier bias. To address these challenges, we propose a framework based on the vision transformer architecture, called the Continual Expansion and Absorption Transformer (CEAT), which consists of two core components. First, we propose the Continual Expansion and Absorption (CEA) method to alleviate the trade-off between new and old classes by expanding a set of parameters (i.e., EF layers) in parallel with the backbone to learn new tasks, while freezing the backbone to retain old-task knowledge. The EF layers can be seamlessly absorbed into the ViT backbone through parameter recombination before inference, mitigating storage and computational burdens. Second, we propose a Dynamic Boundary-Aware (DBA) method to generate dynamic pseudo-features for classifier calibration to address the classifier bias. Extensive experiments demonstrate that our approach achieves state-of-the-art performance, showing significant improvements of 4.82% and 5.92% on TinyImageNet and ImageNet-Subset, respectively.
KW - Class-incremental learning
KW - continual expansion and absorption
KW - dynamic boundary-aware
KW - non-exemplar
UR - https://www.scopus.com/pages/publications/105002311129
U2 - 10.1109/TCSVT.2024.3502837
DO - 10.1109/TCSVT.2024.3502837
M3 - Article
AN - SCOPUS:105002311129
SN - 1051-8215
VL - 35
SP - 3146
EP - 3159
JO - IEEE Transactions on Circuits and Systems for Video Technology
JF - IEEE Transactions on Circuits and Systems for Video Technology
IS - 4
ER -