Abstract
Flocking control, an essential approach for the survivable navigation of multirobot systems, has been widely applied in fields such as logistics, service delivery, and search and rescue. However, realistic environments are typically complex, dynamic, and even aggressive, posing considerable threats to the safety of flocking robots. In this article, an Asymmetric Self-play-empowered Flocking Control framework based on deep reinforcement learning is proposed to address this concern. Specifically, the flocking robots are trained concurrently with learnable adversarial interferers to strengthen the learned flocking strategy. A two-stage self-play training paradigm is developed to improve the robustness and generalization of the model. Furthermore, an auxiliary training module for learning the transition dynamics is designed, substantially enhancing adaptability to environmental uncertainties. Feature-level and agent-level attention are implemented for action and value generation, respectively. Both extensive comparative experiments and real-world deployment demonstrate the superiority and practicality of the proposed framework.
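The abstract only names the framework's components, so the sketch below is a rough, hypothetical illustration of the two-stage asymmetric self-play structure it describes: a toy flock policy is trained alone in stage one, then concurrently with a learnable interferer that ascends the negated objective in stage two. None of this is the authors' code: the linear policies, the `episode_return` game, and the evolution-strategies update `es_step` merely stand in for the paper's attention-based MADRL policies and their actual training rule, and the auxiliary transition-dynamics module and attention mechanisms are omitted entirely.

```python
import numpy as np

rng = np.random.default_rng(0)
OBS_DIM, ACT_DIM = 8, 2  # toy dimensions, chosen arbitrarily


def episode_return(flock_W, inter_W, steps=40):
    """Roll out one episode of a toy linear pursuit game.

    The flock's action steers the state toward the origin (standing in
    for holding formation); the interferer's action pushes it away.
    Returns the flock's cumulative reward.
    """
    obs, total = np.ones(OBS_DIM), 0.0
    for _ in range(steps):
        act = flock_W @ obs
        adv = inter_W @ obs if inter_W is not None else np.zeros(ACT_DIM)
        nxt = 0.9 * obs
        nxt[:ACT_DIM] += 0.1 * (act + adv)
        nxt = np.clip(nxt, -10.0, 10.0)  # bounded workspace
        total -= nxt @ nxt               # flock reward: stay near origin
        obs = nxt
    return total


def es_step(W, score_fn, sigma=0.05, lr=0.02, samples=8):
    """One antithetic evolution-strategies ascent step on score_fn(W).

    A crude stand-in for whatever policy-gradient update the paper uses.
    """
    grad = np.zeros_like(W)
    for _ in range(samples):
        eps = rng.normal(size=W.shape)
        grad += eps * (score_fn(W + sigma * eps) - score_fn(W - sigma * eps))
    return W + lr * grad / (2 * sigma * samples)


flock = rng.normal(scale=0.1, size=(ACT_DIM, OBS_DIM))
inter = rng.normal(scale=0.1, size=(ACT_DIM, OBS_DIM))

# Stage 1: the flock trains alone to acquire a basic flocking behavior.
for _ in range(20):
    flock = es_step(flock, lambda W: episode_return(W, None))

# Stage 2: concurrent asymmetric self-play. The interferer ascends the
# negated objective, so it learns to disturb exactly what the flock
# optimizes, hardening the flocking policy against adversarial pressure.
for _ in range(20):
    flock = es_step(flock, lambda W: episode_return(W, inter))
    inter = es_step(inter, lambda W: -episode_return(flock, W))

print("final flock return vs. trained interferer:",
      round(episode_return(flock, inter), 2))
```

Note the asymmetry: the two sides differ in role and reward rather than being mirror-image copies of one another, which is what distinguishes this setup from conventional symmetric self-play.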
Original language | English |
---|---|
Pages (from-to) | 3266-3275 |
Number of pages | 10 |
Journal | IEEE Transactions on Industrial Informatics |
Volume | 21 |
Issue number | 4 |
Early online date | 23 Jan 2025 |
DOIs | |
Publication status | Published - 2025 |
Bibliographical note
Publisher Copyright: © 2005-2012 IEEE.
Funding
This work was supported in part by the Technological Innovation Guidance Program of Shandong Province under Grant YDZX2024090, in part by the Natural Science Foundation of Shandong Province under Grant ZR2024MF031, in part by the National Natural Science Foundation of China under Grant 62373225, and in part by the Joint Funds of the National Natural Science Foundation of China under Grant U2013204 and Grant U23A20339.
Keywords
- Adversarial training
- autonomous vehicles
- flocking
- multiagent deep reinforcement learning (MADRL)