3D-to-2D Distillation for Indoor Scene Parsing

Zhengzhe LIU, Xiaojuan QI, Chi-Wing FU

Research output: Conference paper (refereed), peer-reviewed

42 Citations (Scopus)

Abstract

Indoor scene semantic parsing from RGB images is very challenging due to occlusions, object distortion, and viewpoint variations. Going beyond prior works that leverage geometry information, typically paired depth maps, we present a new approach, a 3D-to-2D distillation framework, that enables us to leverage 3D features extracted from large-scale 3D data repositories (e.g., ScanNet-v2) to enhance 2D features extracted from RGB images. Our work has three novel contributions. First, we distill 3D knowledge from a pretrained 3D network to supervise a 2D network to learn simulated 3D features from 2D features during training, so the 2D network can infer without requiring 3D data. Second, we design a two-stage dimension normalization scheme to calibrate the 2D and 3D features for better integration. Third, we design a semantic-aware adversarial training model to extend our framework for training with unpaired 3D data. Extensive experiments on various datasets, ScanNet-v2, S3DIS, and NYU-v2, demonstrate the superiority of our approach. Also, experimental results show that our 3D-to-2D distillation improves model generalization.
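The core idea above, supervising a 2D network's simulated 3D features with a pretrained 3D network's features after calibrating both, can be illustrated with a minimal sketch. This is not the paper's implementation: the function names (`normalize_dims`, `distillation_loss`), the use of simple per-channel z-score normalization, and the plain L2 objective are all illustrative assumptions standing in for the paper's two-stage dimension normalization and full training losses.

```python
import numpy as np

def normalize_dims(feat, eps=1e-5):
    # Hypothetical stand-in for the paper's dimension normalization:
    # zero-mean, unit-variance per feature channel.
    mean = feat.mean(axis=0, keepdims=True)
    std = feat.std(axis=0, keepdims=True)
    return (feat - mean) / (std + eps)

def distillation_loss(f2d_simulated, f3d_teacher):
    # L2 distillation between the 2D network's simulated 3D features
    # and the frozen 3D teacher's features, after calibrating both.
    a = normalize_dims(f2d_simulated)
    b = normalize_dims(f3d_teacher)
    return float(np.mean((a - b) ** 2))

# Toy example: 8 points, 16-dim features for student and teacher.
rng = np.random.default_rng(0)
f2d = rng.standard_normal((8, 16))
f3d = rng.standard_normal((8, 16))
loss = distillation_loss(f2d, f3d)
```

Because the teacher is only needed to compute this loss, it can be dropped at inference time, which is why the trained 2D network requires no 3D input.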
Original language: English
Title of host publication: Proceedings: 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2021
Publisher: IEEE Computer Society
Pages: 4462-4472
Number of pages: 11
ISBN (Electronic): 9781665445092
DOIs
Publication status: Published - 2021
Externally published: Yes

Bibliographical note

Publisher Copyright:
© 2021 IEEE
