Abstract
This paper presents a new approach for 3D shape generation, inversion, and manipulation through direct generative modeling on a continuous implicit representation in the wavelet domain. Specifically, we propose a compact wavelet representation with a pair of coarse and detail coefficient volumes to implicitly represent 3D shapes via truncated signed distance functions and multi-scale biorthogonal wavelets. We then design a pair of neural networks: a diffusion-based generator that produces diverse shapes in the form of coarse coefficient volumes, and a detail predictor that produces compatible detail coefficient volumes to introduce fine structures and details. Further, we can jointly train an encoder network to learn a latent space for inverting shapes, enabling a rich variety of whole-shape and region-aware shape manipulations. Both quantitative and qualitative experimental results demonstrate the compelling shape generation, inversion, and manipulation capabilities of our approach over state-of-the-art methods.
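To illustrate the compact wavelet representation described in the abstract, the minimal sketch below decomposes a truncated signed distance function (TSDF) volume into a coarse coefficient volume plus multi-scale detail coefficients using biorthogonal wavelets, here via PyWavelets (`pywt`). The wavelet family (`bior6.8`), grid resolution, truncation distance, and number of decomposition levels are illustrative assumptions, not values taken from the paper; the synthetic sphere TSDF stands in for a real shape.

```python
# Minimal sketch (not the paper's implementation): build a TSDF for a sphere
# and decompose it with multi-scale biorthogonal wavelets into a coarse
# coefficient volume plus detail coefficients, echoing the compact wavelet
# representation described in the abstract. Wavelet family, resolution,
# truncation distance, and level count are illustrative assumptions.
import numpy as np
import pywt

RES = 64      # grid resolution (assumed)
TRUNC = 0.1   # truncation distance for the SDF (assumed)
LEVELS = 3    # number of wavelet decomposition levels (assumed)

# Sample a signed distance field of a sphere on a regular grid.
coords = np.linspace(-1.0, 1.0, RES)
x, y, z = np.meshgrid(coords, coords, coords, indexing="ij")
sdf = np.sqrt(x**2 + y**2 + z**2) - 0.5        # sphere of radius 0.5
tsdf = np.clip(sdf, -TRUNC, TRUNC)             # truncate to a narrow band

# Multi-scale 3D decomposition with a biorthogonal wavelet.
coeffs = pywt.wavedecn(tsdf, wavelet="bior6.8", level=LEVELS)
coarse = coeffs[0]        # low-frequency "coarse coefficient volume"
details = coeffs[1:]      # per-level dicts of detail coefficients

print("coarse volume shape:", coarse.shape)
for lvl, d in enumerate(details, start=1):
    print(f"level {lvl} detail keys:", sorted(d.keys()))

# In the paper's pipeline, a diffusion-based generator would synthesize the
# coarse volume and a detail predictor would supply compatible detail
# coefficients; reconstructing the TSDF is simply the inverse transform.
recon = pywt.waverecn(coeffs, wavelet="bior6.8")
print("max reconstruction error:",
      np.abs(recon[:RES, :RES, :RES] - tsdf).max())
```

The near-zero reconstruction error illustrates why the pair of coarse and detail coefficient volumes suffices as an implicit shape representation: generating or editing the coefficients and inverting the wavelet transform yields a TSDF from which the surface can be extracted.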
| Original language | English |
| --- | --- |
| Article number | 16 |
| Number of pages | 18 |
| Journal | ACM Transactions on Graphics |
| Volume | 43 |
| Issue number | 2 |
| Early online date | 3 Jan 2024 |
| DOIs | |
| Publication status | Published - Apr 2024 |
| Externally published | Yes |
Bibliographical note
Publisher Copyright: © 2024 Copyright held by the owner/author(s). Publication rights licensed to ACM.
Funding
This work is supported by the Research Grants Council of the Hong Kong Special Administrative Region (Project nos. CUHK 14206320 and 14201921).
Keywords
- diffusion model
- shape generation
- shape manipulation
- wavelet representation