Abstract
Dynamic material handling (DMH) involves assigning dynamically arriving material-transporting tasks to suitable vehicles in real time to minimize makespan and tardiness. In real-world scenarios, historical task records are usually available, which enables a decision policy to be trained on multiple instances constructed from those records. Recently, reinforcement learning (RL) has been applied to solve DMH. Because dynamic events such as newly arriving tasks occur, high adaptability is required. Solving DMH is also challenging because constraints, including task delay, must be satisfied, and feedback is received only after all tasks are served, which leads to sparse rewards. Moreover, making the best use of limited computational resources and historical records to train a robust policy is crucial: the time allocated to different problem instances strongly affects the learning process. To tackle these challenges, this article proposes a novel adaptive constrained evolutionary RL (ACERL) approach, which maintains a population of actors for diverse exploration. ACERL assesses each actor to tackle sparse rewards and uses constraint violation to restrict the behavior of the policy. Moreover, ACERL adaptively selects the most beneficial training instances for improving the policy. Extensive experiments on eight training and eight unseen test instances demonstrate the outstanding performance of ACERL compared with several state-of-the-art algorithms. Policies trained by ACERL schedule the vehicles while fully satisfying the constraints. Additional experiments on 40 unseen noised instances show the robustness of ACERL, and cross-validation further confirms its overall effectiveness. Finally, a rigorous ablation study highlights the coordination and benefits of each ingredient of ACERL.
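The abstract describes three ingredients: actor assessment under sparse rewards, constraint-aware restriction of policy behavior, and adaptive selection of training instances. The sketch below illustrates how two of these could plausibly fit together; it is a hedged illustration, not the authors' implementation. The lexicographic ranking (feasible actors before infeasible ones, then by episodic reward) and the proportional instance sampler are assumptions chosen for clarity, and all names (`rank_actors`, `select_instance`, the `evals` dictionaries) are hypothetical.

```python
import random

def rank_actors(evals):
    """Rank a population of actors for an ES-style update.

    Each entry in `evals` is a dict with the actor's total constraint
    violation (e.g., accumulated task delay) and its episodic reward,
    which is only available once all tasks are served (sparse reward).
    Actors with less violation rank first; ties break on higher reward.
    Returns actor indices from best to worst.
    """
    return sorted(
        range(len(evals)),
        key=lambda i: (evals[i]["violation"], -evals[i]["reward"]),
    )

def select_instance(weights):
    """Adaptively pick a training instance.

    `weights` holds one nonnegative usefulness score per historical
    instance (e.g., recent fitness improvement it produced); instances
    are sampled with probability proportional to their weight, so
    limited training time concentrates on the most beneficial ones.
    """
    total = sum(weights)
    r, acc = random.uniform(0, total), 0.0
    for idx, w in enumerate(weights):
        acc += w
        if acc >= r:
            return idx
    return len(weights) - 1
```

In a full ES loop, `rank_actors` would convert raw evaluations into rank-based fitness for the gradient estimate, and `select_instance` would choose which historical instance the next generation is evaluated on.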
| Original language | English |
|---|---|
| Pages (from-to) | 19327-19341 |
| Number of pages | 15 |
| Journal | IEEE Transactions on Neural Networks and Learning Systems |
| Volume | 36 |
| Issue number | 10 |
| Early online date | 8 Jul 2025 |
| DOIs | |
| Publication status | Published - Oct 2025 |
Bibliographical note
Publisher Copyright: © 2025 IEEE.
Funding
This work was supported in part by the National Key Research and Development Program of China under Grant 2023YFE0106300, in part by the National Natural Science Foundation of China under Grant 62250710682 and Grant 62476119, in part by the Guangdong Major Project of Basic and Applied Basic Research under Grant 2023B0303000010, and in part by the Internal Grants of Lingnan University.
Keywords
- Constrained optimization
- dynamic material handling (DMH)
- evolutionary reinforcement learning (ERL)
- experience-based optimization
- natural evolution strategy (ES)