Abstract
Multi-robot cooperative navigation is an important task that has been widely studied in fields such as logistics, transportation, and disaster rescue. However, most existing methods either rely on strong assumptions or are validated only in simple scenarios, which greatly hinders their deployment in the real world. In this paper, we consider more complex environments in which robots can only acquire local observations from their own sensors and have only limited communication capabilities for mapless collaborative navigation. To address this challenging task, we propose a hierarchical framework that fuses both Sensor-wise and Agent-wise features for Perception-Improving (SAPI), adaptively integrating features from different information sources to improve perception capabilities. Specifically, to facilitate scene understanding, we assign prior knowledge to the visual encoder to generate efficient embeddings. For effective feature representation, an attention-based sensor fusion network is designed to fuse sensor-level information from visual and LiDAR sensors, while graph convolution with a multi-head attention mechanism aggregates agent-level information from an arbitrary number of neighbors. In addition, reinforcement learning is used to optimize the policy, with a novel compound reward function introduced to guide training. Extensive experiments demonstrate that our method has excellent generalization ability across different scenarios and scalability to large-scale systems.
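The paper's implementation is not reproduced here; purely as an illustrative sketch, the snippet below shows one way the two fusion stages described in the abstract could be structured in PyTorch: a small attention block fusing per-robot visual and LiDAR embeddings (sensor-wise), followed by multi-head attention over a variable number of neighbor features as a simplified stand-in for the graph convolution with attention (agent-wise). All module names, dimensions, and the choice of PyTorch are assumptions, not details from the paper.

```python
# Hypothetical sketch of SAPI-style two-stage fusion (not the authors' code).
# Assumes PyTorch; layer names and sizes are illustrative only.
import torch
import torch.nn as nn

class SensorFusion(nn.Module):
    """Attention-based fusion of visual and LiDAR embeddings (sensor-wise)."""
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.proj = nn.Linear(dim, dim)

    def forward(self, vis_feat, lidar_feat):
        # vis_feat, lidar_feat: (batch, dim) per-robot sensor embeddings
        tokens = torch.stack([vis_feat, lidar_feat], dim=1)   # (batch, 2, dim)
        fused, _ = self.attn(tokens, tokens, tokens)          # self-attention over the two sensors
        return self.proj(fused.mean(dim=1))                   # (batch, dim)

class AgentAggregation(nn.Module):
    """Multi-head attention over an arbitrary number of neighbors (agent-wise)."""
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, ego_feat, neighbor_feats, neighbor_mask=None):
        # ego_feat: (batch, dim); neighbor_feats: (batch, n_neighbors, dim)
        query = ego_feat.unsqueeze(1)                          # (batch, 1, dim)
        agg, _ = self.attn(query, neighbor_feats, neighbor_feats,
                           key_padding_mask=neighbor_mask)    # mask handles variable neighbor counts
        return agg.squeeze(1)                                  # (batch, dim)

if __name__ == "__main__":
    batch, n_neighbors, dim = 4, 3, 128
    fuse, agg = SensorFusion(dim), AgentAggregation(dim)
    ego = fuse(torch.randn(batch, dim), torch.randn(batch, dim))
    out = agg(ego, torch.randn(batch, n_neighbors, dim))
    print(out.shape)  # torch.Size([4, 128])
```

The fused per-robot feature would then feed the reinforcement-learning policy; the compound reward described in the abstract is not specified here, so it is omitted from the sketch.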
| Original language | English |
| --- | --- |
| Number of pages | 15 |
| Journal | IEEE Transactions on Intelligent Transportation Systems |
| DOIs | |
| Publication status | E-pub ahead of print - 2 Jan 2024 |
Bibliographical note
Publisher Copyright: IEEE
Keywords
- Collision avoidance
- Multi-robot systems
- Navigation
- Planning
- Robot kinematics
- Robot sensing systems
- Task analysis
- Visualization
- collision avoidance
- deep reinforcement learning
- feature fusion