MosViT: towards vision transformers for moving object segmentation based on Lidar point cloud

  • Chunyun Ma
  • Xiaojun Shi
  • Yingxin Wang
  • Shuai Song
  • Zhen Pan
  • Jiaxiang Hu
Research output: Contribution to journal › Article › peer-review


Abstract

Moving object segmentation is fundamental for various downstream tasks in robotics and autonomous driving, providing them with crucial information. Effectively extracting spatial-temporal information from consecutive frames and addressing the scarcity of annotated data are important for learning-based 3D LiDAR moving object segmentation (LiDAR-MOS). In this work, we propose a novel deep neural network based on vision transformers (ViTs) to tackle this problem. We first validate the feasibility of transformer networks for this task, offering an alternative to CNNs. Specifically, we use a dual-branch structure that takes range (residual) images as input to extract spatial-temporal information from consecutive frames and fuses it using a motion-guided attention mechanism. Furthermore, we employ the ViT as the backbone, keeping its architecture unchanged from the one used for RGB images. This enables us to leverage models pre-trained on RGB images to improve results, addressing the issue of limited LiDAR point cloud data, since pre-training on RGB images is far cheaper than acquiring and annotating point clouds. We validate the effectiveness of our approach on the LiDAR-MOS benchmark of SemanticKITTI and achieve results comparable to CNN-based methods operating on range image data. The source code and trained models will be available at https://github.com/mafangniu/MOSViT.git.
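The abstract describes fusing the appearance branch (current range image) with the temporal branch (residual images) via motion-guided attention. The paper's exact module is not reproduced here; the following is a rough illustrative sketch in NumPy, where the function name and the channel-pooling/sigmoid gate are our assumptions standing in for learned layers (e.g. a 1×1 convolution) in the real network.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def motion_guided_attention(appearance, motion):
    """Fuse spatial-branch features with temporal (residual-image) features.

    appearance: (C, H, W) features from the current range image
    motion:     (C, H, W) features from the stacked residual images
    Returns a fused (C, H, W) feature map.
    """
    # Pool the motion features over channels to a single spatial map (1, H, W)
    # and squash it to (0, 1) -- a stand-in for a learned gating layer.
    gate = sigmoid(motion.mean(axis=0, keepdims=True))
    # Motion-gated appearance features plus a residual connection, so regions
    # with weak motion evidence still retain their appearance response.
    return appearance * gate + appearance

# Toy usage with random feature maps
appearance = np.random.randn(64, 32, 512)  # e.g. 64 channels, 32x512 range image
motion = np.random.randn(64, 32, 512)
fused = motion_guided_attention(appearance, motion)
```

The residual connection mirrors a common design choice in attention-based fusion: the gate modulates rather than replaces the appearance features, which keeps static structure intact where the residual images carry little signal.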

Original language: English
Article number: 116302
Journal: Measurement Science and Technology
Volume: 35
Issue number: 11
DOIs
State: Published - Nov 2024

Keywords

  • LiDAR moving object segmentation (LiDAR-MOS)
  • ViT
  • pre-trained models
