Abstract
LiDAR-produced point clouds are the major source for most state-of-the-art 3D object detectors. Yet, small, distant, and incomplete objects with sparse or few points are often hard to detect. We present Sparse2Dense, a new framework to efficiently boost 3D detection performance by learning to densify point clouds in latent space. Specifically, we first train a dense point 3D detector (DDet) with a dense point cloud as input and design a sparse point 3D detector (SDet) with a regular point cloud as input. Importantly, we formulate the lightweight plug-in S2D module and the point cloud reconstruction module in SDet to densify 3D features and train SDet to mimic the dense 3D features produced by DDet. Hence, at inference, SDet can simulate dense 3D features from regular (sparse) point cloud inputs without requiring dense inputs. We evaluate our method on the large-scale Waymo Open Dataset and the Waymo Domain Adaptation Dataset, showing its high performance and efficiency compared with state-of-the-art methods. The code is available at https://github.com/stevewongv/Sparse2Dense.
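The core training idea described above can be sketched as feature-level distillation: a lightweight module densifies the sparse branch's latent features, which are then trained to match the dense branch's features. The following is a minimal, hypothetical PyTorch sketch; the module structure, names, and loss choice (MSE) are illustrative assumptions, not the authors' actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class S2DModule(nn.Module):
    """Illustrative lightweight plug-in that densifies sparse BEV features.

    This is an assumption for demonstration; the paper's S2D module
    may differ in architecture.
    """
    def __init__(self, channels: int):
        super().__init__()
        self.densify = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, sparse_feat: torch.Tensor) -> torch.Tensor:
        # Residual densification of the sparse-branch feature map.
        return sparse_feat + self.densify(sparse_feat)

def feature_distillation_loss(student_feat: torch.Tensor,
                              teacher_feat: torch.Tensor) -> torch.Tensor:
    # Train the sparse branch (SDet) so its densified features follow
    # the frozen dense branch's (DDet) features.
    return F.mse_loss(student_feat, teacher_feat)

# Toy usage with random feature maps standing in for detector backbones.
s2d = S2DModule(channels=64)
sparse_feat = torch.randn(2, 64, 32, 32)   # features from sparse input (SDet)
dense_feat = torch.randn(2, 64, 32, 32)    # features from dense input (DDet)
loss = feature_distillation_loss(s2d(sparse_feat), dense_feat)
```

In this sketch, the dense-branch features act as a fixed teacher signal, so only the sparse branch and the S2D module receive gradients during distillation.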
Original language | English |
---|---|
Title of host publication | 36th Conference on Neural Information Processing Systems, NeurIPS 2022: proceedings |
Editors | S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, A. Oh |
Publisher | Neural Information Processing Systems Foundation |
ISBN (Electronic) | 9781713871088 |
Publication status | Published - 2022 |
Externally published | Yes |
Publication series
Name | Advances in Neural Information Processing Systems |
---|---|
Volume | 35 |
Bibliographical note
Publisher Copyright:© 2022 Neural information processing systems foundation. All rights reserved.
Funding
This work was supported by the project #MMT-p2-21 of the Shun Hing Institute of Advanced Engineering, The Chinese University of Hong Kong, and the Shanghai Committee of Science and Technology (Grant No.21DZ1100100).