ZeroShape: The Limitations We Are Facing by @fewshot

Too Long; Didn't Read

Due to computational resource limitations, we are not able to process and train our model on the full Objaverse dataset.

Abstract and 1 Introduction

2. Related Work

3. Method and 3.1. Architecture

3.2. Loss and 3.3. Implementation Details

4. Data Curation

4.1. Training Dataset

4.2. Evaluation Benchmark

5. Experiments and 5.1. Metrics

5.2. Baselines

5.3. Comparison to SOTA Methods

5.4. Qualitative Results and 5.5. Ablation Study

6. Limitations and Discussion

7. Conclusion and References


A. Additional Qualitative Comparison

B. Inference on AI-generated Images

C. Data Curation Details

6. Limitations and Discussion

Due to computational resource limitations, we are not able to process and train our model on the full Objaverse dataset. Currently, the Objaverse meshes we use constitute only 5% of Objaverse and 0.4% of Objaverse-XL. Given the promising scaling properties of recent foundation models [12, 24, 61], we believe it will be valuable to explore the scaling properties of our method.


Another limitation of our work is that we have not considered the modeling of object texture. Predicting the texture of unseen surfaces is highly ill-posed and can greatly benefit from a strong 2D prior. Given the recent success of 2D diffusion models [48] and their application in optimization-based 3D generation methods [7, 11, 29, 34, 40, 59], we think it would be promising to initialize or regularize these methods with our shape prior, potentially boosting both optimization efficiency and generation quality.


This paper is available on arxiv under CC BY 4.0 DEED license.

Authors:

(1) Zixuan Huang, University of Illinois at Urbana-Champaign (equal contribution);

(2) Stefan Stojanov, Georgia Institute of Technology (equal contribution);

(3) Anh Thai, Georgia Institute of Technology;

(4) Varun Jampani, Stability AI;

(5) James M. Rehg, University of Illinois at Urbana-Champaign.