ZeroShape: The Inference on AI-Generated Images

by @fewshot



Abstract and 1 Introduction

2. Related Work

3. Method and 3.1. Architecture

3.2. Loss and 3.3. Implementation Details

4. Data Curation

4.1. Training Dataset

4.2. Evaluation Benchmark

5. Experiments and 5.1. Metrics

5.2. Baselines

5.3. Comparison to SOTA Methods

5.4. Qualitative Results and 5.5. Ablation Study

6. Limitations and Discussion

7. Conclusion and References


A. Additional Qualitative Comparison

B. Inference on AI-generated Images

C. Data Curation Details

B. Inference on AI-generated Images

We present additional results of ZeroShape on images generated with DALL·E 3. To test out-of-domain generalization, we generate images of imaginary objects and use them as input to our model (see Fig. 10). Despite the domain gap relative to realistic or rendered images, ZeroShape faithfully recovers the global shape structure and accurately follows the local geometry cues in the input image. These results also demonstrate the potential of using ZeroShape in a text-based 3D generation workflow.


This paper is available on arxiv under CC BY 4.0 DEED license.

Authors:

(1) Zixuan Huang, University of Illinois at Urbana-Champaign (equal contribution);

(2) Stefan Stojanov, Georgia Institute of Technology (equal contribution);

(3) Anh Thai, Georgia Institute of Technology;

(4) Varun Jampani, Stability AI;

(5) James M. Rehg, University of Illinois at Urbana-Champaign.