Using External Attention in Vision-based Autonomous Driving

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer review

Abstract

Imitation learning (IL) provides a concise framework for autonomous driving: it learns a driving policy from human demonstrations by mapping sensor data to vehicle controls. However, achieving effective learning remains an ongoing challenge in driving policy learning. A feasible solution is to introduce an attention mechanism, which enables the deep model to focus on features relevant to the driving task. In this paper, we propose to train a driving policy model with the recent external attention mechanism incorporated into the policy network as blocks. Experimental results on the CARLA driving benchmark demonstrate that our external-attention-guided driving policy model achieves better performance and requires less training time than the same model without an attention mechanism. Moreover, our method outperforms state-of-the-art self-attention driving policy methods.
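The external attention mechanism the abstract refers to replaces self-attention's query-key-value products computed from the input itself with two small learnable external memory units shared across samples, followed by a double normalization. A minimal NumPy sketch of one such block; the function name `external_attention`, the memory names `Mk`/`Mv`, and all shapes are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def softmax(x, axis):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def external_attention(F, Mk, Mv):
    """External attention over learnable external memories.
    F:  (N, d) input feature tokens.
    Mk: (S, d) external key memory (learnable, shared across samples).
    Mv: (S, d) external value memory (learnable, shared across samples)."""
    A = F @ Mk.T                                   # (N, S) affinities with key memory
    A = softmax(A, axis=0)                         # normalize over tokens...
    A = A / (A.sum(axis=1, keepdims=True) + 1e-9)  # ...then L1-normalize over slots
    return A @ Mv                                  # (N, d) output from value memory

# Usage with illustrative sizes: 4 tokens, 8-dim features, 16 memory slots.
rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 8))
Mk = rng.normal(size=(16, 8)) * 0.1
Mv = rng.normal(size=(16, 8)) * 0.1
out = external_attention(feats, Mk, Mv)
print(out.shape)  # (4, 8)
```

Because the memories are fixed-size linear layers rather than per-sample key/value projections, the block's cost is linear in the number of tokens, which is consistent with the abstract's observation that the attention-guided model trains faster.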

Original language: English
Title of host publication: Proceeding - 2021 China Automation Congress, CAC 2021
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 6578-6582
Number of pages: 5
ISBN (Electronic): 9781665426473
DOIs
State: Published - 2021
Event: 2021 China Automation Congress, CAC 2021 - Beijing, China
Duration: 22 Oct 2021 - 24 Oct 2021

Publication series

Name: Proceeding - 2021 China Automation Congress, CAC 2021

Conference

Conference: 2021 China Automation Congress, CAC 2021
Country/Territory: China
City: Beijing
Period: 22/10/21 - 24/10/21

Keywords

  • attention mechanism
  • autonomous driving
  • driving policy
  • imitation learning

