Abstract
In this work, we focus on synthesizing high-quality textures on 3D meshes. We present Point-UV Diffusion, a coarse-to-fine pipeline that marries denoising diffusion models with UV mapping to generate 3D-consistent, high-quality texture images in UV space. We first introduce a point diffusion model that synthesizes low-frequency texture components, with tailored style guidance to counter the biased color distribution. The resulting coarse texture provides global consistency and serves as a condition for the subsequent UV diffusion stage, helping regularize the model to produce a 3D-consistent UV texture image. A UV diffusion model with hybrid conditions then enhances texture fidelity in the 2D UV space. Our method can process meshes of any genus, generating diversified, geometry-compatible, and high-fidelity textures. Code is available at https://cvmi-lab.github.io/Point-UV-Diffusion.
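The two-stage structure described above (coarse point diffusion, then conditioned UV refinement) can be sketched as follows. This is a minimal illustrative toy, not the authors' implementation: the denoiser, shapes, point-to-UV projection, and all names are hypothetical stand-ins for the learned components in the paper.

```python
# Hypothetical sketch of a coarse-to-fine diffusion pipeline; all names,
# shapes, and the "denoiser" are illustrative assumptions, not the paper's API.
import numpy as np

rng = np.random.default_rng(0)

def denoise_step(x, t, condition=None):
    # Stand-in for a learned denoising network: shrinks the noise,
    # optionally pulling the sample toward a conditioning signal.
    target = condition if condition is not None else 0.0
    return x + 0.1 * (target - x)

def sample(shape, steps, condition=None):
    # Reverse diffusion: start from Gaussian noise, iteratively denoise.
    x = rng.standard_normal(shape)
    for t in reversed(range(steps)):
        x = denoise_step(x, t, condition)
    return x

# Stage 1: point diffusion predicts coarse per-point colors (N points, RGB).
coarse_point_colors = sample((1024, 3), steps=50)

# Project coarse colors into UV space (trivial stand-in for rasterization
# through the mesh's UV mapping).
coarse_uv = coarse_point_colors[:256].reshape(16, 16, 3)

# Stage 2: UV diffusion refines a texture image conditioned on the coarse map.
fine_uv = sample((16, 16, 3), steps=50, condition=coarse_uv)
```

The key design point the sketch mirrors is that stage 2 never starts from scratch: its sampling loop is conditioned on the globally consistent coarse texture, so the refinement only needs to add high-frequency detail.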
| Original language | English |
|---|---|
| Title of host publication | Proceedings: 2023 IEEE/CVF International Conference on Computer Vision, ICCV 2023 |
| Publisher | Institute of Electrical and Electronics Engineers Inc. |
| Pages | 4183-4193 |
| Number of pages | 11 |
| ISBN (Electronic) | 9798350307184 |
| DOIs | |
| Publication status | Published - 2023 |
| Externally published | Yes |
Bibliographical note
Publisher Copyright: © 2023 IEEE.
Funding
This work has been supported by Hong Kong Research Grant Council - Early Career Scheme (Grant No. 27209621), General Research Fund Scheme (Grant No. 17202422), and RGC matching fund scheme (RMGS). Part of the described research work is conducted in the JC STEM Lab of Robotics for Soft Materials funded by The Hong Kong Jockey Club Charities Trust.