Abstract
There is a prevailing trend towards fusing multi-modal information for 3D object detection (3OD). However, challenges related to computational efficiency, plug-and-play capabilities, and accurate feature alignment have not been adequately addressed in the design of multi-modal fusion networks. In this paper, we present PointSee, a lightweight, flexible, and effective multi-modal fusion solution that facilitates various 3OD networks by semantic feature enhancement of point clouds (e.g., LiDAR or RGB-D data) assembled with scene images. Beyond the existing wisdom of 3OD, PointSee consists of a hidden module (HM) and a seen module (SM): HM decorates point clouds using 2D image information in an offline fusion manner, requiring minimal or even no adaptation of existing 3OD networks; SM further enriches the point clouds by acquiring point-wise representative semantic features, leading to enhanced performance of existing 3OD networks. Besides the new architecture of PointSee, we propose a simple yet efficient training strategy to mitigate potentially inaccurate regressions of 2D object detection networks. Extensive experiments on popular outdoor/indoor benchmarks show quantitative and qualitative improvements of PointSee over thirty-five state-of-the-art methods.
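The abstract describes HM's offline decoration only at a high level. As a rough illustration of the general idea of decorating points with 2D semantic information via projection-based fusion, here is a minimal sketch; the function name `decorate_points`, the pinhole projection, and the per-pixel score map are assumptions made for illustration, not the paper's actual procedure.

```python
import numpy as np

def decorate_points(points, semantic_map, proj_matrix):
    """Hypothetical HM-style decoration: project 3D points into the
    image plane and append the per-pixel 2D semantic scores to each
    point. This is an illustrative sketch, not the paper's method.

    points:       (N, 3) xyz coordinates
    semantic_map: (H, W, C) class scores from a 2D network
    proj_matrix:  (3, 4) camera projection matrix
    Returns:      (N, 3 + C) decorated points
    """
    n = points.shape[0]
    homo = np.hstack([points, np.ones((n, 1))])             # (N, 4) homogeneous coords
    uvw = homo @ proj_matrix.T                              # (N, 3) image-plane coords
    uv = uvw[:, :2] / np.clip(uvw[:, 2:3], 1e-6, None)      # perspective divide
    h, w, _ = semantic_map.shape
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, w - 1)   # clamp to image bounds
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, h - 1)
    scores = semantic_map[v, u]                             # (N, C) sampled scores
    return np.hstack([points, scores])                      # (N, 3 + C)

# Toy usage with placeholder data (shapes only; not real calibration):
pts = np.random.rand(1000, 3) * 50.0
seg = np.random.rand(375, 1242, 4)            # e.g., a 4-class score map
P = np.eye(3, 4)                              # placeholder projection matrix
decorated = decorate_points(pts, seg, P)      # (1000, 7)
```

Because such decoration can be precomputed and cached offline, a downstream 3OD network would consume (N, 3 + C) inputs with little or no architectural change, which is consistent with the plug-and-play goal stated in the abstract.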
| Original language | English |
| --- | --- |
| Pages (from-to) | 1-18 |
| Number of pages | 18 |
| Journal | IEEE Transactions on Visualization and Computer Graphics |
| DOIs | |
| Publication status | E-pub ahead of print - 10 Nov 2023 |
Bibliographical note
Publisher Copyright: IEEE
Keywords
- 3D object detection
- Data augmentation
- Feature extraction
- Object detection
- Point cloud compression
- PointSee
- Proposals
- Semantics
- Three-dimensional displays
- feature enhancement
- multi-modal fusion