TY - JOUR
T1 - Differentiable neural architecture search augmented with pruning and multi-objective optimization for time-efficient intelligent fault diagnosis of machinery
AU - Zhang, Kaiyu
AU - Chen, Jinglong
AU - He, Shuilong
AU - Xu, Enyong
AU - Li, Fudong
AU - Zhou, Zitong
N1 - Publisher Copyright:
© 2021 Elsevier Ltd
PY - 2021/9
Y1 - 2021/9
N2 - Intelligent fault diagnosis, mainly based on neural networks, has been widely used in machinery monitoring. Although such deep learning methods are effective, new architectures are mainly handcrafted through a series of experiments that require ample time and substantial effort. To automate the process of building neural networks and save design time, a novel differentiable neural architecture search method is proposed. By gradually reducing candidate operations while retaining trained parameters during pruning, the computation consumed by each stage of neural architecture search is decreased, which accelerates the search process. To improve the inference efficiency of subnetworks, specially designed penalty terms are introduced into the objective function for searching optimal numbers of layers and nodes, which reduces the complexity of subnetworks and saves calculation time in signal analysis. In addition, the exclusive competition between candidate operations is broken by changing the discretization and selection methods of operations, which provides a basis for channel fusion. The effectiveness of the proposed method is verified on two datasets. Experiments show that this method can generate subnetworks of lower complexity and less computational cost than other state-of-the-art neural architecture search techniques, while achieving competitive results.
AB - Intelligent fault diagnosis, mainly based on neural networks, has been widely used in machinery monitoring. Although such deep learning methods are effective, new architectures are mainly handcrafted through a series of experiments that require ample time and substantial effort. To automate the process of building neural networks and save design time, a novel differentiable neural architecture search method is proposed. By gradually reducing candidate operations while retaining trained parameters during pruning, the computation consumed by each stage of neural architecture search is decreased, which accelerates the search process. To improve the inference efficiency of subnetworks, specially designed penalty terms are introduced into the objective function for searching optimal numbers of layers and nodes, which reduces the complexity of subnetworks and saves calculation time in signal analysis. In addition, the exclusive competition between candidate operations is broken by changing the discretization and selection methods of operations, which provides a basis for channel fusion. The effectiveness of the proposed method is verified on two datasets. Experiments show that this method can generate subnetworks of lower complexity and less computational cost than other state-of-the-art neural architecture search techniques, while achieving competitive results.
KW - Deep learning
KW - Multi-objective optimization
KW - Network pruning
KW - Neural architecture search
KW - Rolling bearing
UR - https://www.scopus.com/pages/publications/85102030450
U2 - 10.1016/j.ymssp.2021.107773
DO - 10.1016/j.ymssp.2021.107773
M3 - Article
AN - SCOPUS:85102030450
SN - 0888-3270
VL - 158
JO - Mechanical Systems and Signal Processing
JF - Mechanical Systems and Signal Processing
M1 - 107773
ER -