Abstract
We present Cart, a new approach to manipulating articulated objects via human commands. Beyond existing work that focuses on inferring articulation structures, we further support manipulating articulated shapes so that they match simple command templates. The key idea of Cart is to leverage predicted object structures to connect visual observations with user commands for effective manipulation. This is achieved by encoding command messages for motion prediction and by a test-time adaptation that adjusts the amount of movement using only command supervision. Across a rich variety of object categories, Cart accurately manipulates object shapes and outperforms state-of-the-art approaches in understanding the inherent articulation structures. It also generalizes well to unseen object categories and real-world objects. We hope Cart can open new directions for instructing machines to operate articulated objects. Code is available at https://github.com/dvlab-research/Cart.
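As a concrete illustration of the test-time adaptation idea mentioned in the abstract, the sketch below is not the authors' released implementation but a minimal PyTorch example under simplifying assumptions: the predicted structure is a single revolute joint (axis, origin, and a movable-part mask), and only a scalar motion amount is optimized against a command-derived loss. All function and argument names here (`articulate`, `command_loss`, etc.) are hypothetical placeholders for the network's predictions and the encoded command.

```python
# Minimal sketch under the assumptions stated above; not the official Cart code.
import torch

def articulate(points, mask, axis, origin, amount):
    """Rotate the movable part (boolean mask) about a revolute joint
    (axis through origin) by `amount` radians, via Rodrigues' formula."""
    k = axis / axis.norm()
    p = points[mask] - origin
    cos, sin = torch.cos(amount), torch.sin(amount)
    rotated = (p * cos
               + torch.cross(k.expand_as(p), p, dim=-1) * sin
               + k * (p @ k).unsqueeze(-1) * (1.0 - cos))
    out = points.clone()
    out[mask] = rotated + origin
    return out

def test_time_adapt(points, mask, axis, origin, command_loss, steps=100, lr=1e-2):
    """Refine only the scalar motion amount; supervision comes solely from a
    loss derived from the user command (here a hypothetical callable)."""
    amount = torch.zeros(1, requires_grad=True)
    opt = torch.optim.Adam([amount], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        moved = articulate(points, mask, axis, origin, amount)
        loss = command_loss(moved)  # e.g. distance to the commanded target state
        loss.backward()
        opt.step()
    with torch.no_grad():
        return amount.detach(), articulate(points, mask, axis, origin, amount)
```

In the actual method, the joint parameters, part mask, and command loss would come from the model's structure prediction and the encoded command; they are treated as given inputs in this sketch.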
Original language | English |
---|---|
Title of host publication | Proceedings: 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023 |
Publisher | IEEE Computer Society |
Pages | 8813-8823 |
Number of pages | 11 |
ISBN (Electronic) | 9798350301298 |
DOIs | |
Publication status | Published - 2023 |
Externally published | Yes |
Bibliographical note
Publisher Copyright: © 2023 IEEE.
Funding
The work is supported by the Research Grants Council of the HKSAR, China [CUHK 14201921] and the Shenzhen Science and Technology Program (KQTD20210811090149095).
Keywords
- 3D from multi-view and sensors