TY - GEN
T1 - Projection vector machine
T2 - 2010 6th IEEE World Congress on Computational Intelligence, WCCI 2010 - 2010 International Joint Conference on Neural Networks, IJCNN 2010
AU - Deng, Wanyu
AU - Zheng, Qinghua
AU - Lian, Shiguo
AU - Chen, Lin
AU - Wang, Xin
PY - 2010
Y1 - 2010
N2 - A small number of samples combined with a large number of input features increases classifier complexity and degrades stability. Dimension reduction has therefore typically been carried out before supervised learning algorithms such as neural networks are applied. This two-stage framework introduces redundancy between dimension reduction and network training. This paper proposes a novel one-stage learning algorithm for high-dimension, small-sample data, called the Projection Vector Machine (PVM), which combines dimension reduction with network training and removes this redundancy. Through a dimension reduction operation such as singular value decomposition (SVD), we not only reduce the dimension but also simultaneously obtain the size of a single-hidden-layer feedforward neural network (SLFN) and its input weight values. This fixed-size network becomes a linear system, so the output weights can be determined by a simple least-squares method. Unlike a traditional backpropagation feedforward neural network (BP), the parameters in PVM do not require iterative tuning, so its training speed is much faster than BP's. Unlike the extreme learning machine (ELM) proposed by Huang [G.-B. Huang, Q.-Y. Zhu, C.-K. Siew, Extreme learning machine: theory and applications, Neurocomputing 70 (2006) 489-501], which assigns input weights randomly, PVM's input weights are ranked by singular value, and the optimal weights are selected in that order. We prove that PVM is a universal approximator for high-dimension, small-sample data. Experimental results show that the proposed one-stage algorithm PVM is faster than two-stage learning approaches such as SVD+BP and SVD+ELM.
KW - Extreme Learning Machine
KW - Neural network
KW - Projection Vector Machine
KW - Singular value decomposition
UR - https://www.scopus.com/pages/publications/79959418027
U2 - 10.1109/IJCNN.2010.5596571
DO - 10.1109/IJCNN.2010.5596571
M3 - Conference contribution
AN - SCOPUS:79959418027
SN - 9781424469178
T3 - Proceedings of the International Joint Conference on Neural Networks
BT - 2010 IEEE World Congress on Computational Intelligence, WCCI 2010 - 2010 International Joint Conference on Neural Networks, IJCNN 2010
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 18 July 2010 through 23 July 2010
ER -