Study on interpretability of artificial neural network models for dynamic load identification

Research output: Contribution to journal › Article › peer-review

7 Scopus citations

Abstract

In recent years, artificial neural networks (ANNs) have been extensively utilized for dynamic load identification due to their remarkable capability to approximate the complex relationships between external dynamic loads and the corresponding responses. While ANN-based methods offer high accuracy in load identification, their over-parameterized, black-box nature raises concerns about their reliability, which hampers their broader adoption in engineering applications. This paper addresses the interpretability of three typical ANN models—namely, the radial basis function neural network (RBFNN), the deep convolutional neural network (CNN), and the deep recurrent neural network (RNN)—in the context of dynamic load identification. We focus on a double-clamped beam as the test case for dynamic load identification, assessing the trustworthiness of these ANN models through analyses of their generalization ability, causality, and robustness. To this end, we propose two novel interpretation algorithms, one operating in the frequency domain and the other in the time domain, to elucidate the causality and robustness of the ANN models, respectively. A systematic framework for evaluating the interpretability of ANN models in dynamic load identification is established, and a comprehensive interpretability analysis of the three models is conducted. Our findings reveal that the three ANN models exhibit varying levels of trustworthiness, with the RNN model demonstrating the highest degree of interpretability. This conclusion is further supported by an interpretability study conducted on mathematical models.

Original language: English
Article number: 117210
Journal: Measurement: Journal of the International Measurement Confederation
Volume: 251
DOIs
State: Published - 30 Jun 2025

Keywords

  • Artificial neural network
  • Causality
  • Dynamic load identification
  • Generalization ability
  • Interpretability
  • Robustness
