TY - JOUR
T1 - SPLBoost
T2 - An Improved Robust Boosting Algorithm Based on Self-Paced Learning
AU - Wang, Kaidong
AU - Wang, Yao
AU - Zhao, Qian
AU - Meng, Deyu
AU - Liao, Xiuwu
AU - Xu, Zongben
N1 - Publisher Copyright:
© 2013 IEEE.
PY - 2021/3
Y1 - 2021/3
AB - It is known that boosting can be interpreted as an optimization technique to minimize an underlying loss function. Specifically, the underlying loss being minimized by the traditional AdaBoost is the exponential loss, which proves to be very sensitive to random noise/outliers. Therefore, several boosting algorithms, e.g., LogitBoost and SavageBoost, have been proposed to improve the robustness of AdaBoost by replacing the exponential loss with some designed robust loss functions. In this article, we present a new way to robustify AdaBoost, that is, incorporating the robust learning idea of self-paced learning (SPL) into the boosting framework. Specifically, we design a new robust boosting algorithm based on the SPL regime, that is, SPLBoost, which can be easily implemented by slightly modifying off-the-shelf boosting packages. Extensive experiments and a theoretical characterization are also carried out to illustrate the merits of the proposed SPLBoost.
KW - AdaBoost
KW - loss function
KW - majorization minimization
KW - robustness
KW - self-paced learning (SPL)
UR - https://www.scopus.com/pages/publications/85077284697
DO - 10.1109/TCYB.2019.2957101
M3 - Article
C2 - 31880577
AN - SCOPUS:85077284697
SN - 2168-2267
VL - 51
SP - 1556
EP - 1570
JO - IEEE Transactions on Cybernetics
JF - IEEE Transactions on Cybernetics
IS - 3
M1 - 8943296
ER -