TY - GEN
T1 - Cognitive map-based model
T2 - 20th IEEE International Conference on Intelligent Transportation Systems, ITSC 2017
AU - Chen, Shitao
AU - Shang, Jinghao
AU - Zhang, Songyi
AU - Zheng, Nanning
N1 - Publisher Copyright:
© 2017 IEEE.
PY - 2017/7/2
Y1 - 2017/7/2
N2 - End-to-end learning and multi-sensor fusion-based methods are two major frameworks used for self-driving cars. To enable these intelligent vehicles to acquire driving skills at a level comparable to that of human drivers, long short-term memory of previous self-driving processes is necessary, but is difficult to introduce into the above-mentioned frameworks. In this paper, we propose a model for self-driving cars called the cognitive map-based neural network (CMNN). Our framework consists of three parts: a convolutional neural network that can perceive the environment in the manner that the human visual cortex does, a cognitive map to describe the locations of objects in a complex traffic scene and the relationships among them, and a recurrent neural network to process long short-term memory from the cognitive map, which is updated in real time. The proposed model is built to simultaneously handle three tasks: i) detecting free space and lane boundaries, ii) estimating vehicle pose and obstacle distance, and iii) learning to plan and control based on the behaviors of a human driver. More significantly, our approach introduces external instructions during an end-to-end driving process. To test it, we created a large-scale road vehicle dataset (RVD) containing more than 50,000 labeled road images captured by three cameras. We implemented the proposed model on an embedded system.
UR - https://www.scopus.com/pages/publications/85046245822
U2 - 10.1109/ITSC.2017.8317627
DO - 10.1109/ITSC.2017.8317627
M3 - Conference contribution
AN - SCOPUS:85046245822
T3 - IEEE Conference on Intelligent Transportation Systems, Proceedings, ITSC
SP - 1
EP - 8
BT - 2017 IEEE 20th International Conference on Intelligent Transportation Systems, ITSC 2017
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 16 October 2017 through 19 October 2017
ER -