Unmanned surface vehicle autonomous racing and obstacle avoidance with robust adversarial deep reinforcement learning

Research output: Contribution to journal › Article › peer-review

Abstract

This paper presents an autonomous racing control method for Unmanned Surface Vehicles (USVs) based on a robust adversarial deep reinforcement learning (ADRL) algorithm, which leverages the strengths of both deep reinforcement learning and adversarial training. The policy is trained against adversarial obstacles and on varied tracks, enhancing the robustness and generalization of autonomous USV racing while ensuring effective obstacle avoidance. A simulation environment for USV racing was developed in Unity3D to conduct the experiments. Simulations demonstrate the performance of the proposed method across diverse track scenarios and in obstacle avoidance. Quantitative results show that ADRL achieves dramatic improvements over baseline DRL methods: collision rates are reduced by 99.85%, task completion rates are improved by 73.75%, and navigation time efficiency is enhanced by approximately 3%, all while maintaining superior safety performance.
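The adversarial-training idea summarized above — a policy trained against obstacles placed by an adversary that actively tries to cause collisions — can be sketched in miniature. The code below is not the paper's implementation: it replaces deep networks with tabular Q-learning on a toy 3-lane track, and all names (`train_protagonist`, `adversary_move`, etc.) are illustrative assumptions, chosen only to show the alternating protagonist/adversary structure.

```python
import random

LANES = 3  # toy track: the USV picks one of three lanes each step


def adversary_move(q, rng):
    """Best-response adversary: place the obstacle in a lane the
    protagonist's current greedy policy would steer into, if one exists;
    otherwise place it at random (keeps all states visited)."""
    for o in range(LANES):
        if max(range(LANES), key=lambda a: q[o][a]) == o:
            return o
    return rng.randrange(LANES)


def train_protagonist(episodes=3000, alpha=0.5, eps=0.3, seed=0):
    """Tabular Q-learning stand-in for the DRL policy.
    State = lane of the adversarial obstacle (fully observed),
    action = lane chosen; reward = -1 on collision, +1 otherwise."""
    rng = random.Random(seed)
    q = [[0.0] * LANES for _ in range(LANES)]
    for _ in range(episodes):
        obstacle = adversary_move(q, rng)          # adversary acts first
        if rng.random() < eps:                     # epsilon-greedy exploration
            action = rng.randrange(LANES)
        else:
            action = max(range(LANES), key=lambda a: q[obstacle][a])
        reward = -1.0 if action == obstacle else 1.0
        # one-step (bandit-style) Q update
        q[obstacle][action] += alpha * (reward - q[obstacle][action])
    return q


def collision_rate(q, trials=100, seed=1):
    """Evaluate the greedy policy against the best-response adversary."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        o = adversary_move(q, rng)
        a = max(range(LANES), key=lambda a_: q[o][a_])
        hits += (a == o)
    return hits / trials
```

Under this setup the trained greedy policy avoids the obstacle lane in every state, so the worst-case collision rate drops to zero — a toy analogue of the robustness gain the abstract reports, under the stated assumptions.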

Original language: English
Article number: 112250
Journal: Engineering Applications of Artificial Intelligence
Volume: 161
DOIs
State: Published - 12 Dec 2025
Externally published: Yes

Keywords

  • Adversarial deep reinforcement learning
  • Autonomous racing
  • Obstacle avoidance
  • Unmanned surface vehicles
