
HyperHuman: Hyper-Realistic Human Generation with Latent Structural Diffusion

by Homology Technology FTW, November 24th, 2024

Too Long; Didn't Read

HyperHuman introduces a new standard in hyper-realistic human image generation by leveraging a large-scale dataset, HumanVerse, and a Latent Structural Diffusion Model. It integrates structural and textural details to create highly realistic human images with diverse layouts.

Authors:

(1) Xian Liu, Snap Inc. and CUHK (work done during an internship at Snap Inc.);

(2) Jian Ren, Snap Inc. (corresponding author: [email protected]);

(3) Aliaksandr Siarohin, Snap Inc.;

(4) Ivan Skorokhodov, Snap Inc.;

(5) Yanyu Li, Snap Inc.;

(6) Dahua Lin, CUHK;

(7) Xihui Liu, HKU;

(8) Ziwei Liu, NTU;

(9) Sergey Tulyakov, Snap Inc.

Abstract and 1 Introduction

2 Related Work

3 Our Approach and 3.1 Preliminaries and Problem Setting

3.2 Latent Structural Diffusion Model

3.3 Structure-Guided Refiner

4 HumanVerse Dataset

5 Experiments

5.1 Main Results

5.2 Ablation Study

6 Discussion and References

A Appendix and A.1 Additional Quantitative Results

A.2 More Implementation Details and A.3 More Ablation Study Results

A.4 More User Study Details

A.5 Impact of Random Seed and Model Robustness and A.6 Broader Impact and Ethical Consideration

A.7 More Comparison Results and A.8 Additional Qualitative Results

A.9 Licenses

ABSTRACT

Despite significant advances in large-scale text-to-image models, achieving hyper-realistic human image generation remains a desirable yet unsolved task. Existing models like Stable Diffusion and DALL·E 2 tend to generate human images with incoherent parts or unnatural poses. To tackle these challenges, our key insight is that the human image is inherently structural over multiple granularities, from the coarse-level body skeleton to the fine-grained spatial geometry. Therefore, capturing such correlations between the explicit appearance and the latent structure in one model is essential for generating coherent and natural human images. To this end, we propose a unified framework, HyperHuman, that generates in-the-wild human images of high realism and diverse layouts. Specifically, 1) we first build a large-scale human-centric dataset, named HumanVerse, which consists of 340M images with comprehensive annotations like human pose, depth, and surface-normal. 2) Next, we propose a Latent Structural Diffusion Model that simultaneously denoises the depth and surface-normal along with the synthesized RGB image. Our model enforces the joint learning of image appearance, spatial relationship, and geometry in a unified network, where each branch complements the others with both structural awareness and textural richness. 3) Finally, to further boost the visual quality, we propose a Structure-Guided Refiner that composes the predicted conditions for more detailed generation at higher resolution. Extensive experiments demonstrate that our framework yields state-of-the-art performance, generating hyper-realistic human images under diverse scenarios.
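To make the abstract's joint-denoising idea concrete, the following is a rough sketch of what such an objective could look like. The notation (a standard DDPM epsilon-prediction loss summed over the three modalities, with a shared timestep t and conditioning c) is our assumption; the paper's exact formulation lives in Sec. 3.2 and is not reproduced here.

```latex
% Hedged sketch: notation assumed by us, not taken from the paper's Sec. 3.2.
% Joint epsilon-prediction loss over the three denoised modalities, sharing
% one timestep t and one conditioning c (text prompt and body skeleton):
\mathcal{L} \;=\; \mathbb{E}_{\mathbf{x}_0,\,\boldsymbol{\epsilon},\,t}
\sum_{m \in \{\mathrm{rgb},\,\mathrm{depth},\,\mathrm{normal}\}}
\left\| \boldsymbol{\epsilon}^{m} -
\boldsymbol{\epsilon}_\theta^{m}\!\left(
\mathbf{x}_t^{\mathrm{rgb}}, \mathbf{x}_t^{\mathrm{depth}},
\mathbf{x}_t^{\mathrm{normal}},\, t,\, c \right) \right\|_2^2,
\qquad
\mathbf{x}_t^{m} = \sqrt{\bar{\alpha}_t}\,\mathbf{x}_0^{m}
+ \sqrt{1-\bar{\alpha}_t}\,\boldsymbol{\epsilon}^{m}.
```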

1 INTRODUCTION

Generating hyper-realistic human images from user conditions, e.g., text and pose, is of great importance to various applications, such as image animation (Liu et al., 2019) and virtual try-on (Wang et al., 2018). To this end, many efforts explore the task of controllable human image generation. Early methods either resort to variational auto-encoders (VAEs) in a reconstruction manner (Ren et al., 2020) or improve realism with generative adversarial networks (GANs) (Siarohin et al., 2019). Though some of them create high-quality images (Zhang et al., 2022; Jiang et al., 2022), unstable training and limited model capacity confine them to small datasets of low diversity. The recent emergence of diffusion models (DMs) (Ho et al., 2020) has set a new paradigm for realistic synthesis, and DMs have become the predominant architecture in generative AI (Dhariwal & Nichol, 2021). Nevertheless, exemplar text-to-image (T2I) models like Stable Diffusion (Rombach et al., 2022) and DALL·E 2 (Ramesh et al., 2022) still struggle to create human images with coherent anatomy, e.g., arms and legs, and natural poses. The main reason is that the human body is articulated with non-rigid deformations, requiring structural information that can hardly be conveyed by text prompts.

To enable structural control for image generation, recent works like ControlNet (Zhang & Agrawala, 2023) and T2I-Adapter (Mou et al., 2023) introduce a learnable branch to modulate pre-trained DMs, e.g., Stable Diffusion, in a plug-and-play manner. However, these approaches suffer from a feature discrepancy between the main and auxiliary branches, leading to inconsistency between the control signals (e.g., pose maps) and the generated images. To address this issue, HumanSD (Ju et al., 2023b) proposes to directly input the body skeleton into the diffusion U-Net by channel-wise concatenation (a minimal sketch follows below). However, it is confined to generating artistic-style images of limited diversity. Besides, human images are synthesized with pose control alone, while other structural information like depth maps and surface-normal maps is not considered.

In a nutshell, previous studies either take a single control signal as the input condition, or treat different control signals separately as independent guidance, instead of modeling the multi-level correlations between human appearance and the different types of structural information. Realistic human generation with coherent structure remains unsolved.
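As an illustration of channel-wise concatenation conditioning, here is a minimal, hypothetical PyTorch sketch; it is our own toy stand-in, not HumanSD's actual code. The encoded pose map is stacked with the noisy latent along the channel axis, so the U-Net's very first convolution sees both together, in contrast to the separate auxiliary branch used by ControlNet-style approaches.

```python
import torch
import torch.nn as nn

class ConcatConditionedUNetStub(nn.Module):
    """Toy stand-in for a diffusion U-Net whose first conv takes extra channels."""

    def __init__(self, latent_ch: int = 4, cond_ch: int = 4, width: int = 64):
        super().__init__()
        # conv_in is widened to accept the pose channels alongside the latent.
        self.conv_in = nn.Conv2d(latent_ch + cond_ch, width, 3, padding=1)
        self.body = nn.Sequential(nn.SiLU(), nn.Conv2d(width, width, 3, padding=1))
        self.conv_out = nn.Conv2d(width, latent_ch, 3, padding=1)

    def forward(self, noisy_latent: torch.Tensor, pose_latent: torch.Tensor) -> torch.Tensor:
        # Channel-wise concatenation: the conditioning travels with the latent
        # through the whole network instead of through a separate branch.
        x = torch.cat([noisy_latent, pose_latent], dim=1)
        return self.conv_out(self.body(self.conv_in(x)))

unet = ConcatConditionedUNetStub()
noisy = torch.randn(1, 4, 64, 64)   # noisy image latent
pose = torch.randn(1, 4, 64, 64)    # encoded skeleton map, same spatial size
eps_pred = unet(noisy, pose)        # predicted noise, shape (1, 4, 64, 64)
```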


Figure 1: Example Results and Visual Comparison. Top: The proposed HyperHuman simultaneously generates the coarse RGB, depth, normal, and high-resolution images conditioned on text and skeleton. Both photo-realistic images and stylistic renderings can be created. Bottom: We compare with recent T2I models, showing better realism, quality, diversity, and controllability. Note that in each 2×2 grid (left), the upper-left is the input skeleton, while the others are the jointly denoised normal, depth, and coarse RGB at 512 × 512. With the full model, we synthesize images of up to 1024 × 1024 (right). Please refer to Sec. A.7 and A.8 for more comparisons and results. Best viewed zoomed in.


In this paper, we propose HyperHuman, a unified framework that generates in-the-wild human images of high realism and diverse layouts. The key insight is that the human image is inherently structural over multiple granularities, from the coarse-level body skeleton to the fine-grained spatial geometry. Therefore, capturing such correlations between the explicit appearance and the latent structure in one model is essential for generating coherent and natural human images.

Specifically, we first establish a large-scale human-centric dataset called HumanVerse that contains 340M in-the-wild human images of high quality and diversity. It has comprehensive annotations, such as coarse-level body skeletons, fine-grained depth and surface-normal maps, and high-level image captions and attributes. On top of this dataset, two modules are designed for hyper-realistic controllable human image generation. In the Latent Structural Diffusion Model, we augment the pre-trained diffusion backbone to simultaneously denoise the RGB, depth, and normal. Appropriate network layers are replicated as structural expert branches, so that the model can both handle inputs and outputs from different domains and guarantee spatial alignment among the denoised textures and structures. Thanks to this dedicated design, the image appearance, spatial relationship, and geometry are jointly modeled within a unified network, where the branches complement each other with both structural awareness and textural richness (see the sketch after this paragraph). Because depth and surface-normal maps are largely uniform, with similar values within local regions, we utilize an improved noise schedule to eliminate low-frequency information leakage. The same timestep is sampled for each branch to achieve better learning and feature fusion. With the spatially-aligned structure maps, the Structure-Guided Refiner then composes the predicted conditions for detailed generation at high resolution. Moreover, we design a robust conditioning scheme to mitigate the error accumulation in our two-stage generation pipeline.
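To make the expert-branch design concrete, below is a minimal, hypothetical PyTorch sketch reflecting our reading of the paragraph above, not the authors' code: the input and output convolutions are replicated per modality while a shared trunk fuses their features, and a single timestep is sampled for all branches. In the real model the replicated layers wrap a pre-trained Stable Diffusion U-Net; the trunk here is a toy placeholder with the time embedding and attention omitted.

```python
import torch
import torch.nn as nn

MODALITIES = ("rgb", "depth", "normal")

class LatentStructuralUNetStub(nn.Module):
    """Toy model: replicated per-modality expert layers around a shared trunk."""

    def __init__(self, latent_ch: int = 4, width: int = 64):
        super().__init__()
        # Replicated expert branches: one input/output conv per modality.
        self.conv_in = nn.ModuleDict(
            {m: nn.Conv2d(latent_ch, width, 3, padding=1) for m in MODALITIES})
        self.conv_out = nn.ModuleDict(
            {m: nn.Conv2d(width, latent_ch, 3, padding=1) for m in MODALITIES})
        # Shared trunk: stands in for the pre-trained U-Net body
        # (time embedding and attention omitted for brevity).
        self.trunk = nn.Sequential(
            nn.SiLU(), nn.Conv2d(width, width, 3, padding=1), nn.SiLU())

    def forward(self, latents: dict) -> dict:
        feats = {m: self.conv_in[m](latents[m]) for m in MODALITIES}
        # Summing fuses the modalities so each branch sees the others' features,
        # while the per-modality residual keeps the outputs spatially aligned.
        fused = self.trunk(sum(feats.values()))
        return {m: self.conv_out[m](fused + feats[m]) for m in MODALITIES}

model = LatentStructuralUNetStub()
t = torch.randint(0, 1000, (1,))  # one shared timestep for all three branches;
                                  # in the real model it enters via the time embedding
latents = {m: torch.randn(1, 4, 64, 64) for m in MODALITIES}
eps_pred = model(latents)         # per-modality noise predictions
```

The shared trunk is what lets structural awareness and textural richness flow between branches, while the separate per-modality convolutions let each domain keep its own input and output statistics.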


To summarize, our main contributions are three-fold: 1) We propose a novel HyperHuman framework for in-the-wild controllable human image generation of high realism. A large-scale human-centric dataset, HumanVerse, is curated with comprehensive annotations like human pose, depth, and surface normal. As one of the earliest attempts at a human-generation foundation model, we hope to benefit future research. 2) We propose the Latent Structural Diffusion Model to jointly capture the image appearance, spatial relationship, and geometry in a unified framework. The Structure-Guided Refiner is further devised to compose the predicted conditions for generation of better visual quality and higher resolution. 3) Extensive experiments demonstrate that our HyperHuman yields state-of-the-art performance, generating hyper-realistic human images under diverse scenarios.


This paper is available on arxiv under CC BY 4.0 DEED license.