TY - JOUR
T1 - Study on interpretability of artificial neural network models for dynamic load identification
AU - Yang, Fengfan
AU - Luo, Yajun
AU - Du, Longfei
AU - Zhang, Yahong
AU - Xie, Shilin
N1 - Publisher Copyright:
© 2025 Elsevier Ltd
PY - 2025/6/30
Y1 - 2025/6/30
N2 - In recent years, artificial neural networks (ANNs) have been extensively utilized for dynamic load identification due to their remarkable capability to approximate the complex relationships between external dynamic loads and the corresponding responses. While ANN-based methods offer high accuracy in load identification, their over-parameterized, black-box nature raises concerns about reliability, which hampers broader adoption in engineering applications. This paper addresses the interpretability of three typical ANN models in the context of dynamic load identification: the radial basis function neural network (RBFNN), the deep convolutional neural network (CNN), and the deep recurrent neural network (RNN). We focus on a double-clamped beam as the test case for dynamic load identification, assessing the trustworthiness of these ANN models through analyses of their generalization ability, causality, and robustness. To this end, we propose two novel interpretation algorithms, one operating in the frequency domain and the other in the time domain, to elucidate the causality and robustness of the ANN models, respectively. A systematic framework for evaluating the interpretability of ANN models in dynamic load identification is established, and a comprehensive interpretability analysis of the three models is conducted. Our findings reveal that the three ANN models exhibit varying levels of trustworthiness, with the RNN model demonstrating the highest degree of interpretability. This conclusion is further supported by the interpretability study conducted on mathematical models.
AB - In recent years, artificial neural networks (ANNs) have been extensively utilized for dynamic load identification due to their remarkable capability to approximate the complex relationships between external dynamic loads and the corresponding responses. While ANN-based methods offer high accuracy in load identification, their over-parameterized, black-box nature raises concerns about reliability, which hampers broader adoption in engineering applications. This paper addresses the interpretability of three typical ANN models in the context of dynamic load identification: the radial basis function neural network (RBFNN), the deep convolutional neural network (CNN), and the deep recurrent neural network (RNN). We focus on a double-clamped beam as the test case for dynamic load identification, assessing the trustworthiness of these ANN models through analyses of their generalization ability, causality, and robustness. To this end, we propose two novel interpretation algorithms, one operating in the frequency domain and the other in the time domain, to elucidate the causality and robustness of the ANN models, respectively. A systematic framework for evaluating the interpretability of ANN models in dynamic load identification is established, and a comprehensive interpretability analysis of the three models is conducted. Our findings reveal that the three ANN models exhibit varying levels of trustworthiness, with the RNN model demonstrating the highest degree of interpretability. This conclusion is further supported by the interpretability study conducted on mathematical models.
KW - Artificial neural network
KW - Causality
KW - Dynamic load identification
KW - Generalization ability
KW - Interpretability
KW - Robustness
UR - https://www.scopus.com/pages/publications/86000514365
U2 - 10.1016/j.measurement.2025.117210
DO - 10.1016/j.measurement.2025.117210
M3 - Article
AN - SCOPUS:86000514365
SN - 0263-2241
VL - 251
JO - Measurement: Journal of the International Measurement Confederation
JF - Measurement: Journal of the International Measurement Confederation
M1 - 117210
ER -