Publication Type
Conference Proceeding Article
Version
acceptedVersion
Publication Date
6-2024
Abstract
Score distillation sampling (SDS) and its variants have greatly boosted the development of text-to-3D generation, but remain vulnerable to geometry collapse and poor textures. To solve this issue, we first deeply analyze SDS and find that its distillation sampling process indeed corresponds to the trajectory sampling of a stochastic differential equation (SDE): SDS samples along an SDE trajectory to yield a less noisy sample, which then serves as guidance to optimize a 3D model. However, the randomness in SDE sampling often leads to a diverse and unpredictable sample that is not always less noisy, and thus does not provide consistently correct guidance, explaining the vulnerability of SDS. Since for any SDE there always exists an ordinary differential equation (ODE) whose trajectory sampling deterministically and consistently converges to the same desired target point as the SDE, we propose a novel and effective "Consistent3D" method that explores the ODE deterministic sampling prior for text-to-3D generation. Specifically, at each training iteration, given an image rendered by a 3D model, we first estimate its desired 3D score function with a pre-trained 2D diffusion model and build an ODE for trajectory sampling. Next, we design a consistency distillation sampling loss, which samples two adjacent points along the ODE trajectory and uses the less noisy sample to guide the noisier one, distilling the deterministic prior into the 3D model. Experimental results show the efficacy of our Consistent3D in generating high-fidelity and diverse 3D objects and large-scale scenes, as shown in Fig. 1. The code is available at https://github.com/sail-sg/Consistent3D.
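For context, the SDE-to-ODE correspondence invoked in the abstract is, in standard diffusion-model notation, the probability-flow ODE: for a forward SDE with drift f and diffusion coefficient g, there exists a deterministic ODE that shares the same marginal distributions, so its trajectory converges to the same target without sampling noise. The following LaTeX sketch of this relation is illustrative and uses assumed notation, not the paper's; in practice the score term is approximated by a pre-trained 2D diffusion model.

% Forward SDE driving the diffusion process
\[
\mathrm{d}\mathbf{x} = f(\mathbf{x}, t)\,\mathrm{d}t + g(t)\,\mathrm{d}\mathbf{w}_t
\]
% Its probability-flow ODE, which shares the same marginals p_t(x) and hence
% deterministically transports a noise sample toward the data distribution
\[
\mathrm{d}\mathbf{x} = \Big[\, f(\mathbf{x}, t) - \tfrac{1}{2}\, g(t)^2\, \nabla_{\mathbf{x}} \log p_t(\mathbf{x}) \,\Big]\,\mathrm{d}t
\]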
Discipline
Graphics and Human Computer Interfaces
Research Areas
Intelligent Systems and Optimization
Areas of Excellence
Digital transformation
Publication
Proceedings of the 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, June 17-21
First Page
1
Last Page
16
Publisher
IEEE
City or Country
Los Alamitos, CA
Citation
WU, Zike; ZHOU, Pan; YI, Xuanyu; YUAN, Xiaoding; and ZHANG, Hanwang.
Consistent3D: Towards consistent high-fidelity text-to-3D generation with deterministic sampling prior. (2024). Proceedings of the 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, June 17-21. 1-16.
Available at: https://ink.library.smu.edu.sg/sis_research/9016
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-No Derivative Works 4.0 International License.
Additional URL
https://openaccess.thecvf.com/content/CVPR2024/papers/Wu_Consistent3D_Towards_Consistent_High-Fidelity_Text-to-3D_Generation_with_Deterministic_Sampling_Prior_CVPR_2024_paper.pdf