DiffHuman: Probabilistic Photorealistic 3D Reconstruction of Humans

CVPR 2024

Google Research, University of Cambridge

Abstract

We present DiffHuman, a probabilistic method for photorealistic 3D human reconstruction from a single RGB image. Despite the ill-posed nature of this problem, most methods are deterministic and output a single solution, often resulting in a lack of geometric detail and blurriness in unseen or uncertain regions. In contrast, DiffHuman predicts a probability distribution over 3D reconstructions conditioned on an input 2D image, which allows us to sample multiple detailed 3D avatars that are consistent with the image. DiffHuman is implemented as a conditional diffusion model that denoises pixel-aligned 2D observations of an underlying 3D shape representation. During inference, we may sample 3D avatars by iteratively denoising 2D renders of the predicted 3D representation. Furthermore, we introduce a generator neural network that approximates rendering with considerably reduced runtime (a 55× speed-up), resulting in a novel dual-branch diffusion framework. Our experiments show that DiffHuman can produce diverse and detailed reconstructions for the parts of the person that are unseen or uncertain in the input image, while remaining competitive with the state-of-the-art when reconstructing visible surfaces.
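To make the sampling procedure described above concrete, the sketch below outlines one plausible reverse-diffusion loop in PyTorch. It is a minimal illustration, not the paper's implementation: the components denoiser, reconstruct_3d and render_or_generate, the observation channel count, and the simplified DDIM-style update are all assumptions introduced here for clarity. At each step, noisy pixel-aligned observations are denoised conditioned on the input image, lifted to a 3D representation, re-rendered (or approximated by the faster generator branch), and re-noised to the previous noise level.

import torch

@torch.no_grad()
def sample_avatar(image, denoiser, reconstruct_3d, render_or_generate,
                  alphas_cumprod, obs_channels=8):
    """Sample one 3D avatar consistent with `image` (3, H, W).

    `denoiser`, `reconstruct_3d` and `render_or_generate` stand in for learned
    components; their interfaces are assumptions made for this sketch.
    `alphas_cumprod` is a 1-D tensor of cumulative noise-schedule terms.
    """
    num_steps = alphas_cumprod.shape[0]
    _, h, w = image.shape

    # Start from Gaussian noise in "observation space": pixel-aligned maps
    # (e.g. colour, surface normals, depth) of the underlying 3D shape.
    x_t = torch.randn(obs_channels, h, w)

    for t in reversed(range(num_steps)):
        # Predict the clean pixel-aligned observations from x_t and the image.
        x0_pred = denoiser(x_t, image, t)

        # Lift the denoised observations to an underlying 3D representation.
        shape_3d = reconstruct_3d(x0_pred, image)

        # Render the 3D shape -- or approximate the render with the faster
        # generator branch -- to obtain observations for the next step.
        obs = render_or_generate(shape_3d)

        if t > 0:
            # Re-noise to the previous noise level (schedule details simplified).
            a_prev = alphas_cumprod[t - 1]
            x_t = a_prev.sqrt() * obs + (1.0 - a_prev).sqrt() * torch.randn_like(obs)
        else:
            x_t = obs

    # The 3D representation recovered at the final step is the sampled avatar.
    return reconstruct_3d(x_t, image)

Because each run starts from a different noise sample, repeated calls yield distinct avatars that all remain consistent with the visible parts of the input image.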

Video Overview

Method

Results

Comparison against Deterministic Methods



Denoising Trajectory and Diversity Visualisation


Unconditional and Edge-Conditioned Samples

BibTeX


@inproceedings{sengupta2024diffhuman,
  author    = {Sengupta, Akash and Alldieck, Thiemo and Kolotouros, Nikos and Corona, Enric and Zanfir, Andrei and Sminchisescu, Cristian},
  title     = {{DiffHuman: Probabilistic Photorealistic 3D Reconstruction of Humans}},
  booktitle = {CVPR},
  month     = {June},
  year      = {2024}
}