TY - JOUR
T1 - MosViT
T2 - towards vision transformers for moving object segmentation based on Lidar point cloud
AU - Ma, Chunyun
AU - Shi, Xiaojun
AU - Wang, Yingxin
AU - Song, Shuai
AU - Pan, Zhen
AU - Hu, Jiaxiang
N1 - Publisher Copyright:
© 2024 IOP Publishing Ltd.
PY - 2024/11
Y1 - 2024/11
AB - Moving object segmentation is fundamental for various downstream tasks in robotics and autonomous driving, providing crucial information for them. Effectively extracting spatial-temporal information from consecutive frames and addressing the scarcity of datasets are important for learning-based 3D LiDAR moving object segmentation (LiDAR-MOS). In this work, we propose a novel deep neural network based on vision transformers (ViTs) to tackle this problem. We first validate the feasibility of transformer networks for this task, offering an alternative to CNNs. Specifically, we utilize a dual-branch structure that takes range (residual) images as input to extract spatial-temporal information from consecutive frames and fuses it using a motion-guided attention mechanism. Furthermore, we employ a ViT as the backbone, keeping its architecture unchanged from the one used for RGB images. This enables us to leverage models pre-trained on RGB images to improve results, addressing the issue of limited LiDAR point cloud data; such pre-training is cheaper than acquiring and annotating point cloud data. We validate the effectiveness of our approach on the LiDAR-MOS benchmark of SemanticKITTI and achieve results comparable to methods that use CNNs on range image data. The source code and trained models will be available at https://github.com/mafangniu/MOSViT.git.
KW - LiDAR moving object segmentation (LiDAR-MOS)
KW - ViT
KW - pre-trained models
UR - https://www.scopus.com/pages/publications/85200583989
U2 - 10.1088/1361-6501/ad6626
DO - 10.1088/1361-6501/ad6626
M3 - Article
AN - SCOPUS:85200583989
SN - 0957-0233
VL - 35
JO - Measurement Science and Technology
JF - Measurement Science and Technology
IS - 11
M1 - 116302
ER -