Too Long; Didn't Read
In this new research, Snapchat and the University of Southern California tackle neural rendering: the ability to generate a photorealistic model in space, just like this one, from pictures of the object, person, or scene of interest. It's great that the generated model looks accurate, with realistic shapes, but what about how it blends into a new scene? And what if the lighting conditions vary among the pictures taken, so the generated model looks different depending on the angle you view it from? This would immediately seem weird and unrealistic to us.