TY - GEN
T1 - Searching from Superior to Inferior
T2 - 43rd Chinese Control Conference, CCC 2024
AU - Lu, Chong
AU - Liu, Meiqin
AU - Luan, Zhirong
AU - He, Yan
AU - Chen, Badong
N1 - Publisher Copyright:
© 2024 Technical Committee on Control Theory, Chinese Association of Automation.
PY - 2024
Y1 - 2024
N2 - Target-driven visual navigation is a challenging task for home service robots, where the robots lack prior information about the environment map and obtain environment information solely through the built-in camera. Home scenarios are characterized by complex spatial layouts, diverse room types, and concealed target locations. Due to the limited information available from the robot's built-in camera, navigating to unseen objects becomes more challenging. In this paper, we propose a superior-inferior relationship visual navigation model (SIR) to learn the dependency relationships among objects. By proposing a new reward function, SIR provides spatial relationship information among objects. In this way, a visible object can be regarded as a reference for navigating to unseen objects. Additionally, our method partially alleviates the problem of sparse rewards in navigation tasks: the robot not only receives positive rewards when it successfully finds the target, but also receives partial rewards when it detects superior objects. This also enhances the interpretability of the visual navigation deep reinforcement learning network. Experimental results in the AI2Thor environment demonstrate that our SIR achieves a 6.5% gain in success rate over the baseline method, and a 13.5% improvement in long-episode environments.
AB - Target-driven visual navigation is a challenging task for home service robots, where the robots lack prior information about the environment map and obtain environment information solely through the built-in camera. Home scenarios are characterized by complex spatial layouts, diverse room types, and concealed target locations. Due to the limited information available from the robot's built-in camera, navigating to unseen objects becomes more challenging. In this paper, we propose a superior-inferior relationship visual navigation model (SIR) to learn the dependency relationships among objects. By proposing a new reward function, SIR provides spatial relationship information among objects. In this way, a visible object can be regarded as a reference for navigating to unseen objects. Additionally, our method partially alleviates the problem of sparse rewards in navigation tasks: the robot not only receives positive rewards when it successfully finds the target, but also receives partial rewards when it detects superior objects. This also enhances the interpretability of the visual navigation deep reinforcement learning network. Experimental results in the AI2Thor environment demonstrate that our SIR achieves a 6.5% gain in success rate over the baseline method, and a 13.5% improvement in long-episode environments.
KW - Superior-inferior relationship
KW - deep reinforcement learning
KW - home scenarios
KW - spatial information representation
KW - target-driven visual navigation
UR - https://www.scopus.com/pages/publications/85205461780
U2 - 10.23919/CCC63176.2024.10662369
DO - 10.23919/CCC63176.2024.10662369
M3 - Conference contribution
AN - SCOPUS:85205461780
T3 - Chinese Control Conference, CCC
SP - 7727
EP - 7732
BT - Proceedings of the 43rd Chinese Control Conference, CCC 2024
A2 - Na, Jing
A2 - Sun, Jian
PB - IEEE Computer Society
Y2 - 28 July 2024 through 31 July 2024
ER -