Abstract
Flocking control, as an essential approach for survivable navigation of multirobot systems, has been widely applied in fields such as logistics, service delivery, and search and rescue. However, real-world environments are typically complex, dynamic, and even aggressive, posing considerable threats to the safety of flocking robots. To address this concern, this article proposes an Asymmetric Self-play-empowered Flocking Control framework based on deep reinforcement learning. Specifically, the flocking robots are trained concurrently with learnable adversarial interferers to elicit more intelligent flocking strategies. A two-stage self-play training paradigm is developed to improve the robustness and generalization of the model. Furthermore, an auxiliary training module that learns the environment's transition dynamics is designed, markedly enhancing adaptability to environmental uncertainties. Feature-level and agent-level attention are implemented for action and value generation, respectively. Both extensive comparative experiments and real-world deployment demonstrate the superiority and practicality of the proposed framework.
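To make the two-stage self-play paradigm and the auxiliary dynamics-learning objective more concrete, the sketch below pairs a REINFORCE-style flocking policy with a learnable adversarial interferer. Everything in it is an illustrative assumption rather than the authors' implementation: the network sizes, the `rollout` placeholder, the 0.5 auxiliary-loss weight, and the epoch at which stage 2 begins are all hypothetical.

```python
# A minimal sketch of the two-stage asymmetric self-play loop and the
# auxiliary transition-dynamics objective described in the abstract. All
# names, network sizes, the REINFORCE surrogate, the stage-switch epoch,
# and the 0.5 auxiliary weight are illustrative assumptions, not the
# authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

OBS_DIM, ACT_DIM = 16, 2

flock_policy = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.Tanh(), nn.Linear(64, ACT_DIM))
interferer = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.Tanh(), nn.Linear(64, ACT_DIM))
# Auxiliary head: predict the next observation from (obs, action) so the
# policy's features internalize the transition dynamics.
dyn_head = nn.Sequential(nn.Linear(OBS_DIM + ACT_DIM, 64), nn.Tanh(), nn.Linear(64, OBS_DIM))

flock_opt = torch.optim.Adam([*flock_policy.parameters(), *dyn_head.parameters()], lr=3e-4)
intf_opt = torch.optim.Adam(interferer.parameters(), lr=3e-4)

def rollout(batch=256):
    """Stand-in for environment interaction; returns random transitions here."""
    obs = torch.randn(batch, OBS_DIM)
    next_obs = obs + 0.1 * torch.randn(batch, OBS_DIM)
    flock_reward = torch.randn(batch)  # placeholder flocking reward signal
    return obs, next_obs, flock_reward

for epoch in range(100):
    stage2 = epoch >= 50  # stage 1: scripted interferer; stage 2: learnable adversary
    obs, next_obs, flock_reward = rollout()

    # REINFORCE surrogate for the flocking robots, plus the auxiliary
    # dynamics-prediction loss computed on the same batch.
    dist = torch.distributions.Normal(flock_policy(obs), 1.0)
    act = dist.sample()
    policy_loss = -(flock_reward * dist.log_prob(act).sum(-1)).mean()
    aux_loss = F.mse_loss(dyn_head(torch.cat([obs, act], dim=-1)), next_obs)
    flock_opt.zero_grad()
    (policy_loss + 0.5 * aux_loss).backward()
    flock_opt.step()

    if stage2:
        # Asymmetric self-play: the interferer maximizes the negated flock reward.
        adv_dist = torch.distributions.Normal(interferer(obs), 1.0)
        adv_act = adv_dist.sample()
        adv_loss = -((-flock_reward) * adv_dist.log_prob(adv_act).sum(-1)).mean()
        intf_opt.zero_grad()
        adv_loss.backward()
        intf_opt.step()
```

In the actual framework, the surrogate objectives would be replaced by the paper's multiagent deep reinforcement learning algorithm, with feature-level attention in action generation and agent-level attention in value generation; the skeleton above only shows where a learnable adversary and an auxiliary dynamics loss plug into a two-stage training loop.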
| Original language | English |
| --- | --- |
| Number of pages | 10 |
| Journal | IEEE Transactions on Industrial Informatics |
| Early online date | 23 Jan 2025 |
| DOIs | |
| Publication status | E-pub ahead of print - 23 Jan 2025 |
Bibliographical note
Publisher Copyright: © 2005-2012 IEEE.
Keywords
- Adversarial training
- autonomous vehicles
- flocking
- multiagent deep reinforcement learning (MADRL)