TY - JOUR
T1 - Variational Label Enhancement
AU - Xu, Ning
AU - Shu, Jun
AU - Zheng, Renyi
AU - Geng, Xin
AU - Meng, Deyu
AU - Zhang, Min-Ling
N1 - Publisher Copyright:
© 1979-2012 IEEE.
PY - 2023/5/1
Y1 - 2023/5/1
N2 - Multi-label learning addresses ambiguity on the label side, i.e., one instance is associated with multiple class labels, where logical labels are typically adopted to rigidly partition class labels into relevant and irrelevant ones. However, the relevance or irrelevance of each label to an instance is essentially relative in real-world tasks, and the label distribution is more fine-grained than logical labels, as it describes an instance with the description degrees of all class labels. Since the label distribution is not explicitly available in most training sets, a process named label enhancement has emerged to recover the label distributions in training datasets. By inducing a generative model of the label distribution and adopting the variational inference technique, the approximate posterior density of the label distributions is obtained by maximizing the variational lower bound. Following this consideration, LEVI is proposed to recover the label distributions from the training examples. In addition, a multi-label predictive model is induced for multi-label learning by leveraging the recovered label distributions along with a specialized objective function. Recovery experiments on fourteen label distribution datasets and predictive experiments on fourteen multi-label learning datasets validate the advantage of our approach over state-of-the-art approaches.
AB - Multi-label learning addresses ambiguity on the label side, i.e., one instance is associated with multiple class labels, where logical labels are typically adopted to rigidly partition class labels into relevant and irrelevant ones. However, the relevance or irrelevance of each label to an instance is essentially relative in real-world tasks, and the label distribution is more fine-grained than logical labels, as it describes an instance with the description degrees of all class labels. Since the label distribution is not explicitly available in most training sets, a process named label enhancement has emerged to recover the label distributions in training datasets. By inducing a generative model of the label distribution and adopting the variational inference technique, the approximate posterior density of the label distributions is obtained by maximizing the variational lower bound. Following this consideration, LEVI is proposed to recover the label distributions from the training examples. In addition, a multi-label predictive model is induced for multi-label learning by leveraging the recovered label distributions along with a specialized objective function. Recovery experiments on fourteen label distribution datasets and predictive experiments on fourteen multi-label learning datasets validate the advantage of our approach over state-of-the-art approaches.
KW - Label enhancement
KW - label ambiguity
KW - label distribution learning
KW - multi-label learning
UR - https://www.scopus.com/pages/publications/85137551011
U2 - 10.1109/TPAMI.2022.3203678
DO - 10.1109/TPAMI.2022.3203678
M3 - Article
C2 - 36054401
AN - SCOPUS:85137551011
SN - 0162-8828
VL - 45
SP - 6537
EP - 6551
JO - IEEE Transactions on Pattern Analysis and Machine Intelligence
JF - IEEE Transactions on Pattern Analysis and Machine Intelligence
IS - 5
ER -