Authors:
(1) Hoon Kim, Beeble AI, who contributed equally to this work;
(2) Minje Jang, Beeble AI, who contributed equally to this work;
(3) Wonjun Yoon, Beeble AI, who contributed equally to this work;
(4) Jisoo Lee, Beeble AI, who contributed equally to this work;
(5) Donghyun Na, Beeble AI, who contributed equally to this work;
(6) Sanghyun Woo, New York University, who contributed equally to this work.
Editor's Note: This is Part 8 of 14 of a study introducing a method for improving how light and shadows can be applied to human portraits in digital images.
Appendix
We constructed the OLAT (One Light at a Time) dataset using a light stage [10, 49] equipped with 137 programmable LED lights and 7 frontal-viewing cameras. Our dataset comprises images of 287 subjects, with each subject captured in up to 15 different poses, resulting in a total of 29,705 OLAT sequences. We sourced HDRI maps from several publicly available archives. Specifically, we acquired 559 HDRI maps from Polyhaven, 76 from Noah Witchell, 364 from HDRMAPS, 129 from iHDRI, and 34 from eisklotz. In addition, we incorporated synthetic HDRIs created using the method proposed in [31]. During training, HDRIs are randomly selected with equal probability from either the real-world or the synthetic collection.
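The sampling step can be sketched as follows; this is a minimal illustration, and the names `real_hdris`, `synthetic_hdris`, and `sample_hdri` are hypothetical, since the paper does not specify the implementation.

```python
import random

def sample_hdri(real_hdris, synthetic_hdris):
    """Pick one training HDRI, drawing from the real-world and the
    synthetic collection with equal probability (hypothetical sketch)."""
    collection = real_hdris if random.random() < 0.5 else synthetic_hdris
    return random.choice(collection)
```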
We produced training pairs by projecting the sampled source and target lighting maps onto the reflectance fields of the OLAT images [10]. To derive the ground-truth intrinsics, we applied the photometric stereo method [51] to obtain the normal and albedo maps.
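The projection step follows the standard image-based relighting formulation of [10]: the relit portrait is a linear combination of the per-light OLAT images, weighted by the HDRI energy falling on each stage light. A minimal NumPy sketch is below; the function name, argument shapes, and the assumption that the HDRI has already been integrated over each light's solid angle are ours, not the paper's.

```python
import numpy as np

def relight(olat_images, light_weights):
    """Image-based relighting sketch.

    olat_images:   (137, H, W, 3) reflectance field of one frame
    light_weights: (137, 3) RGB energy of the environment map,
                   pre-integrated over each light's solid angle
    Returns the relit image: sum_i w_i * O_i over the 137 lights.
    """
    # Contract over the light axis, keeping per-channel weights.
    return np.einsum('lhwc,lc->hwc', olat_images, light_weights)
```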
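For the intrinsics, classical Lambertian photometric stereo recovers per-pixel albedo and normals by least squares: under I = albedo * (L @ n), the solution is g = pinv(L) @ I with albedo = |g| and n = g / |g|. The sketch below illustrates this idea under that Lambertian assumption; it is a simplified stand-in, not the exact pipeline of [51].

```python
import numpy as np

def photometric_stereo(intensities, light_dirs):
    """Simplified Lambertian photometric stereo (illustrative only).

    intensities: (L, H, W) grayscale OLAT observations
    light_dirs:  (L, 3) unit direction of each stage light
    Returns (albedo (H, W), normals (3, H, W)).
    """
    L_num, H, W = intensities.shape
    I = intensities.reshape(L_num, -1)         # (L, H*W) pixel stack
    g = np.linalg.pinv(light_dirs) @ I         # (3, H*W) least-squares solve
    albedo = np.linalg.norm(g, axis=0)         # per-pixel albedo |g|
    normals = g / np.clip(albedo, 1e-8, None)  # unit surface normals
    return albedo.reshape(H, W), normals.reshape(3, H, W)
```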
This paper is available on arxiv under CC BY-NC-SA 4.0 DEED license.