Massachusetts Institute of Technology
  • on: May 1, 2024
  • in: NeurIPS

Score Distillation via Reparametrized DDIM

While 2D diffusion models generate realistic, high-detail images, 3D shape generation methods like Score Distillation Sampling (SDS) built on these 2D diffusion models produce cartoon-like, over-smoothed shapes. To help explain this discrepancy, we show that the image guidance used in Score Distillation can be understood as the velocity field of a 2D denoising generative process, up to the choice of a noise term. In particular, after a change of variables, SDS resembles a high-variance version of Denoising Diffusion Implicit Models (DDIM) with a differently-sampled noise term: SDS introduces noise i.i.d. randomly at each step, while DDIM infers it from the previous noise predictions. This excessive variance can lead to over-smoothing and unrealistic outputs. We show that a better noise approximation can be recovered by inverting DDIM in each SDS update step. This modification makes SDS’s generative process for 2D images almost identical to DDIM. In 3D, it removes over-smoothing, preserves higher-frequency detail, and brings the generation quality closer to that of 2D samplers. Experimentally, our method achieves 3D generation quality better than or comparable to other state-of-the-art Score Distillation methods, without training additional neural networks or requiring multi-view supervision, and it provides useful insight into the relationship between 2D and 3D asset generation with diffusion models.
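The contrast between the two noise terms described above can be sketched in a few lines. This is a toy scalar illustration, not the paper's implementation: `eps_phi` is a hypothetical stand-in for the pretrained 2D diffusion model's noise prediction, and `alpha_bar` is an assumed toy noise schedule. The point it shows is structural: the SDS noise term is a fresh i.i.d. Gaussian sample at every step, while the DDIM-inverted noise is a deterministic function of the current rendering.

```python
import math
import random

rng = random.Random(0)
# Toy noise schedule (illustrative assumption, not the paper's schedule).
alpha_bar = [0.999 - 0.02 * s for s in range(50)]

def eps_phi(x_t, t):
    """Hypothetical stand-in for the pretrained 2D diffusion model's noise
    prediction eps_phi(x_t, t); a real system would query the network here."""
    return math.tanh(x_t) * (1.0 - alpha_bar[t])

def sds_noise(x0, t):
    # SDS: the noise term is drawn i.i.d. at every update step, which the
    # abstract identifies as the source of excess variance.
    return rng.gauss(0.0, 1.0)

def ddim_inverted_noise(x0, t):
    # Sketch of the proposed fix: infer the noise by running deterministic
    # DDIM inversion from the current rendering x0 up to noise level t,
    # keeping the model's own noise prediction instead of a fresh sample.
    x_s = x0
    for s in range(t):
        eps = eps_phi(x_s, s)
        x_s = (math.sqrt(alpha_bar[s + 1]) *
               (x_s - math.sqrt(1.0 - alpha_bar[s]) * eps) /
               math.sqrt(alpha_bar[s]) +
               math.sqrt(1.0 - alpha_bar[s + 1]) * eps)
    return eps_phi(x_s, t)
```

Calling `ddim_inverted_noise` twice with the same rendering returns the same value, whereas two calls to `sds_noise` return different samples; replacing the latter with the former is what makes the SDS update resemble a DDIM step.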

Citation

@inproceedings{lukoianov2024score,
    title     = {Score Distillation via Reparametrized DDIM},
    author    = {Lukoianov, Artem and
                 de Ocáriz Borde, Haitz Sáez and
                 Greenewald, Kristjan and
                 Guizilini, Vitor Campagnolo and
                 Bagautdinov, Timur and
                 Sitzmann, Vincent and
                 Solomon, Justin},
    year      = {2024},
    booktitle = {NeurIPS},
}