Abstract
It is an interesting open problem to enable robots to efficiently and effectively learn long-horizon manipulation skills. Motivated to augment robot learning via more effective exploration, this work develops task-driven reinforcement learning with action primitives (TRAPs), a new manipulation skill learning framework that augments standard reinforcement learning algorithms with formal methods and a parameterized action space (PAS). In particular, TRAPs uses linear temporal logic (LTL) to specify complex manipulation skills. LTL progression, a semantics-preserving rewriting operation, is then used to decompose the training task at an abstract level, inform the robot of its current task progress, and guide it via reward functions. The PAS, a predefined library of heterogeneous action primitives, further improves the efficiency of robot exploration. We highlight that TRAPs augments the learning of manipulation skills in both learning efficiency and effectiveness (i.e., satisfaction of task constraints). Extensive empirical studies demonstrate that TRAPs outperforms most existing methods.
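The LTL progression mentioned above can be sketched in a few lines: a formula is rewritten against the propositions observed at the current step, so the rewritten formula is exactly "what remains to be done." The tuple-based formula encoding, operator names, and the example task below are illustrative assumptions for this sketch, not the paper's implementation.

```python
# Minimal sketch of LTL progression (a semantics-preserving rewrite).
# Formulas are nested tuples, e.g. ("until", ("atom", "holding"), ("atom", "placed")).
# NOTE: this encoding and the example task are assumptions made for illustration.

def prog(phi, sigma):
    """Progress formula `phi` through one step whose true propositions are `sigma`.
    Returns True (satisfied), False (violated), or the remaining formula."""
    if phi is True or phi is False:
        return phi
    op = phi[0]
    if op == "atom":                      # atomic proposition: true iff observed now
        return phi[1] in sigma
    if op == "not":
        r = prog(phi[1], sigma)
        return (not r) if isinstance(r, bool) else ("not", r)
    if op == "and":
        return conj(prog(phi[1], sigma), prog(phi[2], sigma))
    if op == "or":
        return disj(prog(phi[1], sigma), prog(phi[2], sigma))
    if op == "next":                      # X phi  ->  phi
        return phi[1]
    if op == "until":                     # phi1 U phi2 -> prog(phi2) or (prog(phi1) and phi1 U phi2)
        return disj(prog(phi[2], sigma), conj(prog(phi[1], sigma), phi))
    if op == "eventually":                # F phi -> prog(phi) or F phi
        return disj(prog(phi[1], sigma), phi)
    if op == "always":                    # G phi -> prog(phi) and G phi
        return conj(prog(phi[1], sigma), phi)
    raise ValueError(f"unknown operator: {op}")

def conj(a, b):                           # boolean simplification keeps formulas small
    if a is True:  return b
    if b is True:  return a
    if a is False or b is False: return False
    return ("and", a, b)

def disj(a, b):
    if a is False: return b
    if b is False: return a
    if a is True or b is True: return True
    return ("or", a, b)

# Example task: "keep holding the object until it is placed".
task = ("until", ("atom", "holding"), ("atom", "placed"))

assert prog(task, {"placed"}) is True     # goal reached
assert prog(task, {"holding"}) == task    # still in progress, formula unchanged
assert prog(task, set()) is False         # dropped before placing: constraint violated
```

A reward function in the spirit of the paper can then be read off the progressed formula (e.g., positive reward when it becomes `True`, negative when it becomes `False`, neutral otherwise), which is how progression both tracks task progress and guides exploration.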
| Original language | English |
|---|---|
| Pages (from-to) | 4513-4526 |
| Number of pages | 14 |
| Journal | IEEE Transactions on Cybernetics |
| Volume | 54 |
| Issue number | 8 |
| DOIs | |
| Publication status | Published - Aug 2024 |
| Externally published | Yes |
Bibliographical note
Publisher Copyright: © 2013 IEEE.
Funding
This work was supported in part by the National Key Research and Development Program of China under Grant 2022YFB4701400/4701403, and in part by the National Natural Science Foundation of China under Grant 62173314.
Keywords
- Action primitives
- linear temporal logic (LTL)
- long-horizon manipulation skills
- task-driven RL