TY - GEN
T1 - Enhancing Scene Simulation for Autonomous Driving with Neural Point Rendering
AU - Yang, Junqing
AU - Yan, Yuxi
AU - Chen, Shitao
AU - Zheng, Nanning
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
AB - Simulation plays a critical role in the development and testing of autonomous driving, but it faces significant challenges in synthesizing complex driving scenarios and realistic sensor information. Existing scene simulation methods either fail to capture the intricate physical characteristics of the 3D world or struggle to extend to autonomous driving datasets with unevenly distributed viewpoints. This paper proposes a point-based neural rendering approach to reconstruct and extend scenes, thereby generating real-world test data for autonomous driving systems from various views. Collected LiDAR data, with sparse regions of the point cloud filled in, provide accurate depth and position references. Additionally, the neural descriptor is enhanced with supplementary features that depend on the observation views and sampling frequency, while multi-scale descriptors are rendered to capture comprehensive information about the scene's appearance. Experimental results demonstrate that our method achieves high-quality rendering for large-scale autonomous driving scenes and enables scene editing to synthesize more diverse and adaptable testing scenes.
UR - https://www.scopus.com/pages/publications/85186497687
U2 - 10.1109/ITSC57777.2023.10422354
DO - 10.1109/ITSC57777.2023.10422354
M3 - Conference contribution
AN - SCOPUS:85186497687
T3 - IEEE Conference on Intelligent Transportation Systems, Proceedings, ITSC
SP - 4100
EP - 4107
BT - 2023 IEEE 26th International Conference on Intelligent Transportation Systems, ITSC 2023
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 26th IEEE International Conference on Intelligent Transportation Systems, ITSC 2023
Y2 - 24 September 2023 through 28 September 2023
ER -