Swin-Transformer-Enabled YOLOv5 with Attention Mechanism for Small Object Detection on Satellite Images

Hang Gong, Tingkui Mu, Qiuxia Li, Haishan Dai, Chunlai Li, Zhiping He, Wenjing Wang, Feng Han, Abudusalamu Tuniyazi, Haoyang Li, Xuechan Lang, Zhiyuan Li, Bin Wang

Research output: Contribution to journal › Article › peer-review

181 Scopus citations

Abstract

Object detection has made tremendous progress in natural images over the last decade. However, the results are hardly satisfactory when natural-image object detection algorithms are applied directly to satellite images. This is due to intrinsic differences in the scale and orientation of objects arising from the bird's-eye perspective of satellite photographs. Moreover, the background of satellite images is complex and object areas are small; as a result, small objects tend to be missed because their features are difficult to extract. Overlap and occlusion among densely packed objects further degrade detection performance. Although the self-attention mechanism has been introduced to detect small objects, its computational complexity grows rapidly with image resolution. We modified the general one-stage detector YOLOv5 to adapt it to satellite images and resolve the above problems. First, new feature fusion layers and a prediction head drawing on the shallow layer are added for small object detection for the first time, since shallow features maximally preserve fine-grained feature information. Second, the original convolutional prediction heads are replaced with Swin Transformer Prediction Heads (SPHs) for the first time. The SPH is an advanced self-attention mechanism whose shifted-window design reduces the computational complexity to linear in the number of tokens. Finally, Normalization-based Attention Modules (NAMs) are integrated into YOLOv5 to improve attention performance in a normalized way. The improved YOLOv5 is termed SPH-YOLOv5. It is evaluated on the NWPU-VHR10 and DOTA datasets, which are widely used for satellite image object detection evaluations. Compared with the baseline YOLOv5, SPH-YOLOv5 improves the mean Average Precision (mAP) by 0.071 on the DOTA dataset.
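The normalization-based attention idea can be illustrated with a short sketch. The function below is a hedged NumPy rendition of NAM-style channel attention, which reweights channels by their batch-normalization scale factors (larger gamma implies a more informative channel). The function name and its exact placement inside SPH-YOLOv5 are assumptions for illustration, not the authors' code.

```python
import numpy as np

def nam_channel_attention(x, gamma, beta, eps=1e-5):
    """NAM-style channel attention (illustrative sketch, not the paper's code).

    x:     feature map of shape (N, C, H, W)
    gamma: per-channel BatchNorm scale factors, shape (C,)
    beta:  per-channel BatchNorm shifts, shape (C,)
    """
    # Batch-normalize each channel over the batch and spatial dimensions.
    mean = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    xn = (x - mean) / np.sqrt(var + eps)
    xn = gamma.reshape(1, -1, 1, 1) * xn + beta.reshape(1, -1, 1, 1)

    # Channel weights come from the normalized BN scale factors:
    # channels with larger |gamma| carry more variance and get more weight.
    w = np.abs(gamma) / np.abs(gamma).sum()

    # Sigmoid gate applied to the weighted, normalized features,
    # then used to rescale the original input.
    gate = 1.0 / (1.0 + np.exp(-(xn * w.reshape(1, -1, 1, 1))))
    return gate * x
```

Because the channel weights are derived from parameters the network already learns (the BN scales), this style of attention adds no extra fully connected or convolutional layers, which is the efficiency argument usually made for NAM.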

Original language: English
Article number: 2861
Journal: Remote Sensing
Volume: 14
Issue number: 12
State: Published - 1 Jun 2022

Keywords

  • Swin transformer
  • deep learning
  • object detection
  • satellite images
  • self-attention mechanism

