Abstract
Compared with optical images, polarimetric synthetic aperture radar (PolSAR) images carry richer feature information. However, traditional convolutional neural networks (CNNs) tend to extract redundant information when processing features from PolSAR data, which weakens network performance and hinders deployment in real-world scenarios. To address this problem, this paper proposes a new lightweight neural network, the Ghost-Inception and coordinate attention network (GICANet), for PolSAR image classification. First, in view of the complex scattering mechanism of PolSAR images, GICANet replaces standard convolution with ghost convolution and builds a Ghost-Inception module to achieve multi-scale feature extraction while reducing the extraction of redundant information. Second, GICANet introduces a new mean-variance coordinated coordinate attention mechanism, which strengthens the network's perception of spatial information and local pixel positions, making it more sensitive to the local texture information of PolSAR data. Finally, GICANet uses attention feature enhancement (AFE) to fuse the shallow features of PolSAR data with deep features and enhances them in the attention module, capturing pixel-level image information more effectively. Compared with traditional CNNs, GICANet is more lightweight, reducing network parameters and computational cost by 87.72 % and 74.20 %, respectively. Comparisons with six state-of-the-art algorithms on four PolSAR image classification datasets show that GICANet achieves superior results.
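The parameter savings from replacing standard convolution with ghost convolution can be illustrated with a simple parameter-count comparison. The sketch below follows the ghost module design from GhostNet (a primary convolution producing a fraction of the output maps, plus cheap depthwise operations generating the remaining "ghost" maps); the specific channel counts, the ratio `s`, and the cheap-operation kernel size `d` are illustrative assumptions, not values taken from the paper.

```python
def conv_params(c_in, c_out, k):
    # Parameter count of a standard k x k convolution (bias omitted).
    return c_in * c_out * k * k

def ghost_params(c_in, c_out, k, s=2, d=3):
    # Ghost convolution: a primary k x k convolution produces
    # c_out // s "intrinsic" maps; cheap d x d depthwise operations
    # then generate the remaining (s - 1) * (c_out // s) "ghost" maps.
    # s and d here are illustrative hyperparameters.
    intrinsic = c_out // s
    primary = c_in * intrinsic * k * k
    cheap = intrinsic * (s - 1) * d * d  # depthwise: one filter per map
    return primary + cheap

std = conv_params(64, 128, 3)     # 73728
ghost = ghost_params(64, 128, 3)  # 36864 + 576 = 37440
print(f"standard: {std}, ghost: {ghost}, saving: {1 - ghost / std:.1%}")
```

With a ratio of s = 2, the per-layer parameter count is roughly halved; the larger overall reductions reported in the abstract also reflect the Ghost-Inception structure and the lightweight attention modules.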
| Original language | English |
|---|---|
| Article number | 112676 |
| Journal | Applied Soft Computing Journal |
| Volume | 170 |
| DOIs | |
| State | Published - Feb 2025 |
| Externally published | Yes |
Keywords
- Attention feature enhancement
- Coordinate attention mechanism
- Ghost-Inception
- Lightweight neural network
- PolSAR image classification
Title: A lightweight PolSAR image classification algorithm based on multi-scale feature extraction and local spatial information perception