Group-spectral superposition and position self-attention transformer for hyperspectral image classification

  • Weitong Zhang
  • Mingwei Hu
  • Sihan Hou
  • Ronghua Shang
  • Jie Feng
  • Songhua Xu

Research output: Contribution to journal › Article › peer-review

11 Scopus citations

Abstract

The existing Vision Transformer is sensitive to perturbations of the original spectral data during hyperspectral semantic segmentation. Moreover, spatial–spectral feature methods do not properly handle the weight relationship between the center pixel and its surrounding pixels. To solve these problems, this paper proposes the Group-Spectral Superposition and Position Self-Attention Transformer (GSPST) for hyperspectral image classification. First, while preserving the original spectral band data, GSPST groups the spectra and superimposes them along the spectral dimension. The superimposed data and the original spectral data are concatenated in the channel dimension and then fed into the network for learning. Second, GSPST uses group convolution to process the original spectral data and the superimposed spectral data separately within the spatial–spectral features. This offsets the noise that adversarial samples introduce into the low-level features along the spectral dimension, thereby reducing their impact on classification accuracy. Finally, GSPST modifies the internal attention formula of the Transformer to avoid the problem of pixels at different positions in hyperspectral images contributing equally to classification. A position self-attention mechanism is proposed by introducing a matrix that assigns different weights to the center pixel and the remaining pixels, adjusting their importance for classification. Introducing position-weight information into the self-attention formula disentangles redundant information in hyperspectral images, improving model robustness and effectively emphasizing the role of the center pixel in semantic segmentation. Compared with six state-of-the-art algorithms, simulation results demonstrate that GSPST achieves higher accuracy after adversarial attacks.
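The abstract's two main ingredients — group-spectral superposition with channel concatenation, and attention scores modulated by a position-weight matrix — can be sketched in numpy. This is an illustrative reading only: the function names, the group-sum superposition, and the element-wise score weighting are assumptions, since the paper's exact formulas are not given in the abstract.

```python
import numpy as np

def group_spectral_superposition(cube, n_groups):
    """Split the B spectral bands into n_groups groups, sum each group
    along the spectral dimension, and concatenate the result with the
    original cube along the channel axis (hypothetical reading of
    GSPST's first step)."""
    H, W, B = cube.shape
    assert B % n_groups == 0, "bands must divide evenly into groups"
    # (H, W, n_groups, B // n_groups) -> sum bands within each group
    superposed = cube.reshape(H, W, n_groups, B // n_groups).sum(axis=3)
    # original spectra are preserved; superposed channels are appended
    return np.concatenate([cube, superposed], axis=-1)  # (H, W, B + n_groups)

def position_self_attention(Q, K, V, pos_weight):
    """Scaled dot-product attention whose score matrix is modulated
    element-wise by a position-weight matrix, so the center pixel can
    contribute more than its neighbors (illustrative; the paper's
    actual formula may combine the matrix differently)."""
    d = Q.shape[-1]
    scores = (Q @ K.T) / np.sqrt(d) * pos_weight  # position weighting
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)      # softmax over keys
    return attn @ V

# Usage on a toy 7x7 patch with 200 bands and 4 spectral groups:
rng = np.random.default_rng(0)
cube = rng.normal(size=(7, 7, 200))
x = group_spectral_superposition(cube, n_groups=4)  # shape (7, 7, 204)
```

A position-weight matrix that doubles the row of the center pixel, for example, biases attention toward that pixel while leaving the softmax normalization intact.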

Original language: English
Article number: 125846
Journal: Expert Systems with Applications
Volume: 265
DOIs
State: Published - 15 Mar 2025
Externally published: Yes

Keywords

  • Adversarial defense
  • Adversarial examples
  • Hyperspectral image classification
  • Transformer

