Abstract
Achieving fully autonomous driving in urban traffic is a significant challenge that requires balancing safety, efficiency, and compliance with traffic regulations. In this letter, we introduce a novel Curriculum Residual Hierarchical Reinforcement Learning (CR-HRL) framework. It uses a rule-based planning model as a guiding mechanism, while a deep reinforcement learning algorithm generates supplementary residual strategies; this combination enables the RL agent to overtake safely and efficiently in complex traffic scenarios. Furthermore, we implement a three-stage curriculum learning strategy that enhances training: by progressively increasing task complexity, it guides the exploration of the autonomous vehicle and improves the reusability of sub-strategies. Ablation experiments confirm the effectiveness of the CR-HRL framework, and comparative experiments highlight its superior efficiency and decision-making over traditional rule-based and RL baselines. Tests on real vehicles further demonstrate its practical applicability in real-world settings.
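The core residual idea in the abstract, a rule-based planner producing a base action that a learned policy corrects with a small residual, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the planner, the linear "residual network", and the observation layout are all assumptions for the example.

```python
import numpy as np

def rule_based_action(obs):
    # Illustrative stand-in for the rule-based planner: steer toward the
    # lane center and hold a target speed. obs = [lateral_offset, speed].
    steer = -0.5 * obs[0]
    throttle = 0.1 * (10.0 - obs[1])
    return np.array([steer, throttle])

def residual_policy(obs, weights):
    # Illustrative stand-in for the learned residual network: a single
    # linear layer squashed into a small correction range.
    return 0.1 * np.tanh(weights @ obs)

def combined_action(obs, weights):
    # Final control = rule-based base action + learned residual correction.
    return rule_based_action(obs) + residual_policy(obs, weights)

obs = np.array([0.2, 8.0])   # small lateral offset, driving at 8 m/s
w = np.zeros((2, 2))         # untrained residual -> zero correction
print(combined_action(obs, w))
```

With zero residual weights the combined action equals the rule-based action, which is why residual schemes can start from a safe baseline and let training only learn the correction.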
| Original language | English |
|---|---|
| Pages (from-to) | 9454-9461 |
| Number of pages | 8 |
| Journal | IEEE Robotics and Automation Letters |
| Volume | 9 |
| Issue number | 11 |
| DOIs | |
| State | Published - 2024 |
Keywords
- Curriculum learning
- autonomous driving
- deep reinforcement learning
- overtaking
- residual policy
Title: Task-Driven Autonomous Driving: Balanced Strategies Integrating Curriculum Reinforcement Learning and Residual Policy