MRTP:时间-动作感知的多尺度时间序列实时行为识别方法

Translated title of the contribution: MRTP: Multi-Temporal Resolution Real-Time Action Recognition Approach by Time-Action Perception

Research output: Contribution to journal › Article › peer-review

1 Scopus citation

Abstract

To address the uneven distribution of temporal and spatial information in video and the difficulty of obtaining long-term information representations, we propose MRTP, a dual-path action recognition method based on time-action perception. It takes RGB video as input and applies two parallel perception paths to extract spatial and motion features from the video at different temporal resolutions. In the spatial path, action perception based on feature differences is applied to locate and strengthen action feature representations; in the motion path, channels are filtered according to action-perception weights, and channel attention and temporal attention are added to enhance key features; the features of the two paths are then fused to compute the action category scores of the video. Experimental results show that MRTP achieves an accuracy of 95.6% on the UCF101 dataset, outperforming the model without temporal attention; on the AVA2.2 dataset, the mAP reaches 28%, outperforming the model without action perception and temporal attention. Compared with current mainstream two-stream networks, 3D convolution, Transformer, and other methods on a number of accuracy metrics, the proposed method achieves better recognition performance and robustness.
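The abstract describes the mechanism only at a high level. The sketch below is a minimal, hypothetical PyTorch rendering of that idea, not the authors' published implementation: the module names (ActionPerception, ChannelTemporalAttention, MRTPSketch), the channel widths, the temporal stride of 4 for the low-resolution path, and the exact form of the feature-difference weighting and attention blocks are all illustrative assumptions.

```python
# Hypothetical sketch of the dual-path idea described in the abstract.
# All module names, channel sizes, and block designs are assumptions for
# illustration only; they are not the published MRTP implementation.
import torch
import torch.nn as nn


class ActionPerception(nn.Module):
    """Reinforces moving regions by weighting features with frame-to-frame differences."""

    def __init__(self, channels: int):
        super().__init__()
        self.proj = nn.Conv3d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time, height, width)
        diff = x[:, :, 1:] - x[:, :, :-1]              # temporal feature difference
        diff = torch.cat([diff, diff[:, :, -1:]], 2)   # pad back to the original length
        weight = torch.sigmoid(self.proj(diff))        # per-position motion weight
        return x * weight + x                          # strengthen action features


class ChannelTemporalAttention(nn.Module):
    """Channel attention plus temporal attention over a feature volume."""

    def __init__(self, channels: int):
        super().__init__()
        self.channel_fc = nn.Sequential(
            nn.Linear(channels, channels // 4), nn.ReLU(),
            nn.Linear(channels // 4, channels), nn.Sigmoid())
        self.temporal_conv = nn.Conv1d(channels, 1, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, t, h, w = x.shape
        # Channel attention from globally pooled features.
        ch = self.channel_fc(x.mean(dim=(2, 3, 4))).view(b, c, 1, 1, 1)
        # Temporal attention over the per-frame pooled signal.
        tm = torch.sigmoid(self.temporal_conv(x.mean(dim=(3, 4)))).view(b, 1, t, 1, 1)
        return x * ch * tm


class MRTPSketch(nn.Module):
    """Two parallel paths over the same RGB clip at different temporal resolutions."""

    def __init__(self, num_classes: int = 101, channels: int = 32):
        super().__init__()
        self.spatial_stem = nn.Conv3d(3, channels, kernel_size=3, padding=1)
        self.motion_stem = nn.Conv3d(3, channels, kernel_size=3, padding=1)
        self.spatial_ap = ActionPerception(channels)
        self.motion_attn = ChannelTemporalAttention(channels)
        self.classifier = nn.Linear(2 * channels, num_classes)

    def forward(self, video: torch.Tensor) -> torch.Tensor:
        # video: (batch, 3, time, height, width)
        slow = video[:, :, ::4]   # low temporal resolution for the spatial path (assumed stride)
        fast = video              # full temporal resolution for the motion path
        s = self.spatial_ap(self.spatial_stem(slow))
        m = self.motion_attn(self.motion_stem(fast))
        # Fuse the two paths by concatenating their pooled features.
        feat = torch.cat([s.mean(dim=(2, 3, 4)), m.mean(dim=(2, 3, 4))], dim=1)
        return self.classifier(feat)   # per-class action scores


if __name__ == "__main__":
    clip = torch.randn(2, 3, 16, 56, 56)   # a tiny random RGB clip
    print(MRTPSketch()(clip).shape)        # torch.Size([2, 101])
```

In this sketch, the spatial path subsamples frames and reinforces moving regions via feature differences, while the motion path keeps the full frame rate and applies channel and temporal attention; the pooled features of both paths are concatenated for classification, mirroring the fusion step described in the abstract.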

Original language: Chinese (Traditional)
Pages (from-to): 22-32
Number of pages: 11
Journal: Hsi-An Chiao Tung Ta Hsueh / Journal of Xi'an Jiaotong University
Volume: 56
Issue number: 3
State: Published - 10 Mar 2022

