Abstract
Representation learning for two partially overlapping point clouds remains an open challenge in unsupervised point cloud registration (U-PCR). In this article, we introduce RegiFormer, an unsupervised registration framework built on a geometric local-to-global transformer (GLGT) and equipped with a self-augmentation (SA) strategy. The GLGT not only aggregates features from local neighborhoods but also extracts global intra-relationships within the entire point cloud using a transformation-invariant geometry embedding. In addition, it enhances the inter-relationships between paired point clouds. To overcome the limited ability of U-PCR methods to learn alignment knowledge, we design an SA strategy that can be flexibly integrated into advanced models, significantly boosting their registration performance. Extensive experiments on five popular synthetic and real-scanned benchmarks demonstrate the superior performance of RegiFormer compared to state-of-the-art methods, both qualitatively and quantitatively.
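The "transformation-invariant geometry embedding" mentioned in the abstract rests on a standard geometric fact: pairwise point distances are unchanged by any rigid transform (rotation plus translation), so features built from them are invariant by construction. The paper's actual embedding is not reproduced here; the following is only a minimal NumPy sketch of that underlying invariance, with all function names hypothetical:

```python
import numpy as np

def pairwise_distances(points):
    """Pairwise Euclidean distance matrix for an (N, 3) point cloud."""
    diff = points[:, None, :] - points[None, :, :]
    return np.linalg.norm(diff, axis=-1)

rng = np.random.default_rng(0)
cloud = rng.normal(size=(64, 3))

# Random rigid transform: rotation R (orthogonal, det = +1) and translation t.
q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
R = q * np.sign(np.linalg.det(q))  # flip sign if needed so det(R) = +1
t = rng.normal(size=3)
moved = cloud @ R.T + t

# The distance matrix — and hence any feature computed from it — is
# identical before and after the rigid transform.
print(np.allclose(pairwise_distances(cloud), pairwise_distances(moved)))
```

Because such distance-based features look the same regardless of the pose of the input cloud, a network consuming them can learn correspondence cues without supervision on the ground-truth transform.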
| Original language | English |
|---|---|
| Article number | 5705313 |
| Journal | IEEE Transactions on Geoscience and Remote Sensing |
| Volume | 62 |
| Early online date | 29 Jul 2024 |
| DOIs | |
| Publication status | Published - 2024 |
| Externally published | Yes |
Bibliographical note
Publisher Copyright: © 1980-2012 IEEE.
Funding
This work was supported by a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China (No. UGC/FDS16/E14/21); the National Natural Science Foundation of China (Nos. T2322012, 62172218, and 62032011); the Shenzhen Science and Technology Program (Nos. JCYJ20220818103401003 and JCYJ20220530172403007); and the Guangdong Basic and Applied Basic Research Foundation (No. 2022A1515010170).
Keywords
- Geometric local-to-global transformer (GLGT)
- point cloud registration
- RegiFormer
- self-augmentation (SA)
Fingerprint
Dive into the research topics of 'RegiFormer: Unsupervised Point Cloud Registration via Geometric Local-to-Global Transformer and Self-Augmentation'.