
Why We Urgently Need to Optimize Neural Rendering

by Ivan Altsybieiev, October 26th, 2022




In the late nineties, the teens hanging out in the first online games couldn’t have imagined that our fave Lineage and FIFA were just the beginning of a new age in virtual entertainment. Crappy visuals and character limitations didn’t stop us from immersing ourselves in the game: scoring goal after goal as a low-poly David Beckham in FIFA ’98 was as cool as making super weird pixel mega builds in Minecraft.


And as computer graphics have evolved and virtual worlds have become more complex and impressive, user expectations have grown along with them.


More fun, more realism, and more immersive interaction — that's what we want in the metaverse once we get access to it.


Gaming realism tries to keep up with films, but classic CGI still has many drawbacks and limitations despite the rapid advances of recent years. Making on-screen 3D characters look lifelike takes time and gets expensive at scale, and the results of transferring a realistic human appearance into the virtual world still look imperfect.


In 2020, the industry's attention was drawn to a new solution for 3D representation: the NeRF method, which uses a neural network to recreate realistic 3D scenes. In short, the invention of neural rendering may be as disruptive as OpenAI’s GPT-3 release was. And although NeRF technology still needs improvement, many industries are already waiting for it to enter the real market.


A sexy explainer of the NeRF method in action, which has already illustrated most NeRF-related news:

NeRF: what's the point?

Classic CGI, which models objects and spaces through polygonal rendering, is constantly balancing the quality of the result against the speed of producing it. After all, the more complex the scene, the more time and computing capacity it takes to reproduce. Two years ago, the new neural radiance fields (NeRF) method broke into the industry, offering significant advantages over all previously used 3D reconstruction tools.
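
For the curious, the core idea fits in one formula. In the original NeRF paper (Mildenhall et al., 2020), the color of a pixel is obtained by integrating the colors and densities that a neural network predicts along the camera ray passing through that pixel:

```latex
% Expected color of the camera ray r(t) = o + t*d between near and far bounds:
C(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\,\sigma(\mathbf{r}(t))\,\mathbf{c}(\mathbf{r}(t),\mathbf{d})\,dt,
\qquad
T(t) = \exp\!\left(-\int_{t_n}^{t}\sigma(\mathbf{r}(s))\,ds\right)
```

Here σ is the volume density the network predicts at a point, c is the view-dependent color, and T(t) is the transmittance: how much light survives the trip along the ray to that point.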


Neural rendering lets you get a photorealistic 3D character or scene from surprisingly little input and with maximum quality: a set of pictures of the same object from different angles is enough for the neural network to reconstruct the result.
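
To make that concrete, here is what happens at render time. Once the network has been trained on those photos, coloring a single pixel means sampling points along the ray through it and alpha-compositing the network's outputs. Below is a minimal NumPy sketch of that quadrature; `radiance_field` is a hypothetical stand-in for a trained model returning `(rgb, sigma)` per sample, not a real API:

```python
import numpy as np

def render_ray(radiance_field, origin, direction,
               t_near=2.0, t_far=6.0, n_samples=64):
    """Estimate one pixel's color with NeRF-style volume rendering."""
    # Sample points along the ray between the near and far planes.
    t = np.linspace(t_near, t_far, n_samples)
    points = origin + t[:, None] * direction        # (n_samples, 3)

    # Query the network at every sample: view-dependent color and density.
    rgb, sigma = radiance_field(points, direction)  # (n_samples, 3), (n_samples,)

    # Convert densities into per-segment opacities (discrete quadrature).
    delta = np.diff(t, append=1e10)                 # spacing between samples
    alpha = 1.0 - np.exp(-sigma * delta)

    # Transmittance: how much light survives to reach each sample.
    trans = np.cumprod(1.0 - alpha + 1e-10)
    trans = np.concatenate([[1.0], trans[:-1]])     # exclusive cumulative product

    # The pixel color is the opacity-weighted sum of the sampled colors.
    weights = alpha * trans
    return (weights[:, None] * rgb).sum(axis=0)
```

The catch: every pixel of every frame needs one such call, and every call queries the network dozens of times.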


Even though the quality of neural rendering is impressive, rendering speed remains a significant problem: the method is computationally heavy. That cost is the primary gatekeeper preventing neural rendering from entering the real market.
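
Some back-of-envelope arithmetic shows the scale of the problem. With the sampling settings reported in the original paper (64 coarse plus 128 fine samples per ray) and a single 800x800 frame:

```python
# Rough cost of naively rendering one frame with the original NeRF setup.
# The numbers are illustrative, taken from the paper's synthetic-scene settings.
width, height = 800, 800
samples_per_ray = 64 + 128   # coarse + fine network passes per pixel

queries = width * height * samples_per_ray
print(f"{queries:,} MLP forward passes for a single frame")
# -> 122,880,000 forward passes: seconds to minutes per frame on a GPU,
#    while games need a new frame every ~16 milliseconds.
```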


So while developers at various tech companies are trying to optimize neural rendering, I want to share my thoughts on where hyperrealistic virtual imagery is most eagerly awaited.


Games: more options for transferring identity

After the rise of face-swap technology, billions of pieces of synthetic media have been created worldwide. Until recently, face-swapping satisfied users' desire to become anyone in the virtual world, but neural 3D rendering opens up even greater prospects for self-expression. And the gaming industry will be the first to offer new options for transferring digital identity. Let’s check out some examples.


  • 3D Celebs

Famous people are often invited to lend their likenesses to game characters, but production studios still pour multi-million budgets and months of hard work into reproducing well-known faces in game environments. As neural 3D rendering scales in the market, transferring appearances from the real world into game spaces will open up new genres and create a unique kind of storytelling.


The development of Cyberpunk 2077, including its super detailed virtual Keanu Reeves, cost over $300 million. Would you be able to resist personalized gameplay of The Witcher as your fave 3D celeb, or a hyper-realistic Luke Skywalker whose avatar you buy and play to save the galaxy? I don’t think you could.


Sports simulators will get the same opportunities: real athletes, whose faces have been licensed for many years in a row, will look hyper-realistic without extra production costs.


  • Advanced character customization

Many role-playing games let you customize characters by adding unique details such as hair, make-up, clothes, or artifacts. Customization has become very detailed, but it is still template-based and can’t fully reproduce a real person’s features.


Modern FPS games mostly trace back to Wolfenstein 3D and its rough male Caucasian avatar with a military background. Whether you like it or not, military and related settings still supply most FPS avatars, but avatar capabilities are now expanding far beyond military aesthetics. BTW, here's a geeky vintage explanation of how avatar customization in the FPS genre (gender, race, background, and other features) has changed thanks to tech development and world events.


Transferring yourself into your fave games with just a few selfies is one of the upcoming options neural rendering will make available in gaming.


Modeling your character on your true appearance will personalize the gaming experience to the maximum. Pioneers already exist, though their methods and the quality of their results still fall short. The LA-based company Possible Reality, for example, creates photo-realistic 3D avatars from your photo for use in virtual spaces.

Communication: hyper-realistic digital avatars and emojis


Expressing emotions in digital communication has evolved over the past couple of decades. In the late nineties, a wordless language for expressing feelings became priceless for users; remember creating the first emoticons from brackets? And the shrug-face guy ¯\_(ツ)_/¯, a kaomoji made in Japan, is still a favorite among old geeks.


In the following years, many tech companies customized their own emoji sets, and once Apple added emoji to iOS messaging in 2011, a new age of emoji began. Emojis for almost every taste appeared in messengers, covering gender, hobbies, race, and unique features. The language of emoji has developed alongside technology, and animated avatars are now the peak of this evolution.


Personalized cartoonish avatars are available in messengers and social media apps, but they still don’t fully transfer our feelings into the digital world.

Neural 3D rendering will soon allow us to look less like animated cartoons and more like real people, opening a new stage of online communication. Reacting with hyper-real 3D memes starring yourself and generating your own realistic 3D sticker pack in a few taps will refresh our vision of social media.


Video production: a simpler and cheaper process

Many novelties in film production are created with the help of game engines: powerful real-time 3D creation tools predominantly used in the game industry. Episodes of Love, Death & Robots, 1899, Space Sweepers, The Mandalorian, and much more were filmed with their help.


Building the world in House of the Dragon with Unreal Engine capabilities.
Previously, it took VFX teams months and enormous budgets to reproduce fantastic worlds and characters at scale and to simulate lighting, physics, and movement.


In recent years, deepfake technology has helped make the process simpler and cheaper. The next step is to improve film production further with neural 3D rendering, maintaining the quality of the final product while reproducing 3D characters and environments without the need to shoot in the real world.



Tools for creating realistic digital humans in the Metaverse


It’s no surprise that the gaming industry was the first to present tools for creating realistic digital humans in 3D. At the beginning of 2021, Epic Games launched MetaHuman Creator, a browser-based app that empowers game developers to build digital humans of the highest quality in less than an hour. The quality of the renders is truly impressive.


Epic Games’ CEO says, “Unreal Engine is, I believe, tooling up to build metaverse experiences.” In April 2022, Sony and KIRKBI, the holding company behind The Lego Group, invested $2 billion in Epic Games to stay at the vanguard of Metaverse development.

MetaHuman Creator works as a very advanced editor in which you mold a character from dozens of ready-made presets. To quickly turn yourself into a digital personality, though, you would still need to be a game designer or another trained professional.


However, this is what we expect from future interactions in the Metaverse: everyone should be able to create a digital twin of themselves to continue doing the things they are used to doing in the real world.


Today, hundreds of millions worldwide enjoy AI tools for instant content creation and want to achieve even more in representing their identity in the virtual space.


Every breakthrough technology that was once complex and expensive eventually becomes cheaper, simpler, and small enough to fit in your pocket. Machine learning brought 2D image editing to a new level of accessibility: now everyone can face-swap on their phone in a few seconds or generate a new kind of AI art from a text prompt. In the same way, AI will make the creation of 3D images and characters widespread, including turning users into digital humans who interact in the Metaverse.


We at Reface are working on new solutions for optimizing neural rendering, and we look forward to hearing more from the market about how we can meet in 3D space.