Sceptics say current VR headsets are clunky, heavy, uncomfortable diving masks that will never make their way into people's homes. Well, I agree, but with one significant caveat: the technology is moving at lightning speed.
Even if you are a true VR evangelist with an Oculus or Vive logo tattooed on your shoulder, you have to admit that current-gen headsets will never achieve mass adoption. Because for now, Virtual Reality means being tethered to an expensive PC inside the prison cell of a Guardian boundary.
It’s just not the virtual freedom we were all dreaming about.
However, there is one crucial thing we are forgetting. It’s called “progress.” And it smashes everything in its way.
In less than two years of the consumer-VR era, headsets have become completely wireless, the field of view has doubled, and the resolution has tripled.
In fact, VR technology is evolving much faster than consumer headsets can be released.
In this series of weekly articles, called “Behold The Next Generation VR Technology,” I will guide you through the world of the latest and most promising tech that will finally make VR the next computing platform.
Most of it is still at an early stage of development, but all of this tech will be implemented in consumer headsets within the next ten years. Some sooner, some later.
I divided this series into parts — one part for every vital aspect of VR technology. Here are some of them:
- Facial tracking — that’s what this piece is about;
- Full body immersion;
- Connection to reality;
- Mind control;
- Realistic graphics;
- Input controls.
In real life your face moves a gazillion times a day (scientifically approved numbers): when you speak, smile, stare at something, or open your mouth to take a Snapchat picture. BUT in VR it looks like this at best:
It’s not the Uncanny Valley; it’s the whole damn Uncanny Grand Canyon.
So yeah, as you may have guessed, facial tracking is for social VR. The main problem: current VR headsets just don’t have this tech inside.
Vigorously waving your hands or picking an emotion from a menu is an option, and many social experiences use it. But it’s not as seamless as VR should be.
Plenty of companies understand this and are pushing the technology forward.
One of the most consumer-ready solutions is an array of infrared cameras that capture eye movement (we’ll talk about that a bit later) as well as lip and chin movements.
The IR camera usually protrudes from the helmet in an upside-down unicornish style. It identifies the contours of your chin and tracks changes in its geometry; then it transfers those changes to the virtual character’s face.
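To make the idea concrete, here is a minimal sketch of that last step — turning changes in tracked chin and mouth landmarks into avatar blendshape weights. Everything here (the landmark indices, the normalization constants, the blendshape names) is invented for illustration; a real pipeline would calibrate these per user:

```python
# Hypothetical sketch: map tracked chin/lip landmarks to avatar blendshape
# weights. Landmark indices and normalization constants are invented for
# illustration; a real system would calibrate them per user.

def clamp01(x):
    return max(0.0, min(1.0, x))

def contour_to_blendshapes(neutral, current, max_jaw_drop=30.0, max_stretch=20.0):
    """neutral/current: lists of (x, y) landmark points in camera pixels.
    Index 0 = chin tip, 1 = left mouth corner, 2 = right mouth corner."""
    # Jaw open: how far the chin tip moved down relative to the rest pose.
    jaw_drop = current[0][1] - neutral[0][1]
    # Mouth stretch: how much the mouth corners moved apart.
    rest_width = neutral[2][0] - neutral[1][0]
    cur_width = current[2][0] - current[1][0]
    stretch = cur_width - rest_width
    return {
        "jawOpen": clamp01(jaw_drop / max_jaw_drop),
        "mouthStretch": clamp01(stretch / max_stretch),
    }

# Rest pose vs. an open-mouth, slightly smiling frame:
neutral = [(100, 200), (80, 180), (120, 180)]
current = [(100, 215), (75, 180), (125, 180)]
print(contour_to_blendshapes(neutral, current))
```

The avatar engine then feeds these weights into its facial rig every frame, which is what makes the virtual face move in sync with yours.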
As you can see, Oculus is definitely experimenting with such technology at its Oculus Research lab in Redmond, Washington.
The other way is to put sensors on the foam insert of an HMD that can measure facial muscle activity through the skin. The technique is called electromyography (EMG). Companies like MindMaze and emteq are working on it.
Such sensors “listen” for facial muscle activation 1,000 times per second using electrodes and analyze the signals with smart algorithms to create a neural signature of an individual’s expressions, without training or calibration.
EMG is a faster and much more accurate way of tracking facial activity. However, unlike cameras, these sensors track only those parts of the face where the headset fits tightly against the skin.
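As a toy illustration of the idea — emphatically not MindMaze’s or emteq’s actual algorithm — here is how a 1 kHz EMG stream might be reduced to per-muscle activation levels and matched against stored expression signatures. All channel names, templates, and numbers are made up:

```python
import math

# Toy sketch of EMG-based expression matching. Channel names, signature
# templates, and sample values are invented; real systems learn a neural
# signature per user from raw electrode data.

def rms(samples):
    """Root-mean-square amplitude of one channel's window of raw EMG samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def activation_vector(window):
    """window: dict channel -> list of raw samples (e.g. 0.1 s at 1 kHz)."""
    return {ch: rms(samples) for ch, samples in window.items()}

def match_expression(activations, signatures):
    """Pick the stored signature closest (Euclidean) to the current activations."""
    def dist(name):
        sig = signatures[name]
        return math.sqrt(sum((activations[ch] - sig[ch]) ** 2 for ch in sig))
    return min(signatures, key=dist)

signatures = {
    "neutral": {"zygomaticus": 0.1, "corrugator": 0.1},  # smile muscle / frown muscle
    "smile":   {"zygomaticus": 0.9, "corrugator": 0.1},
    "frown":   {"zygomaticus": 0.1, "corrugator": 0.8},
}
# A window with strong smile-muscle activity and a quiet brow:
window = {"zygomaticus": [0.8, -0.9, 1.0, -0.85], "corrugator": [0.05, -0.1, 0.08, -0.06]}
print(match_expression(activation_vector(window), signatures))  # -> smile
```

The real systems do this continuously on the raw signal, which is why EMG can feel faster than camera-based tracking.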
And what about the eyes?
That’s where an array of good old IR cameras comes in handy.
You can already find this tech in the FOVE VR HMD. FOVE puts six IR emitters and one IR camera per eye into the headset. The emitters’ beams (you can’t see this light) make it easier to locate your pupils and, correspondingly, track their motion.
After processing the motion data with special software, FOVE knows exactly where you are looking in any virtual scene. Oculus also bought a company called The Eye Tribe that was working on similar technology.
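A toy version of that gaze-mapping step looks something like this (illustrative only — FOVE’s real pipeline is proprietary, and the calibration numbers below are invented): the pupil center found in the IR camera image is normalized against pupil positions recorded while you stared at calibration targets, then interpolated to a point on the virtual display:

```python
# Illustrative gaze mapping: normalize the detected pupil center against the
# pupil positions recorded at calibration targets, then interpolate to screen
# coordinates. All numbers are invented for the example.

def pupil_to_gaze(pupil, cal_min, cal_max, screen_w=1920, screen_h=1080):
    """pupil: (x, y) pupil center in IR camera pixels.
    cal_min/cal_max: pupil positions recorded while the user looked at the
    top-left and bottom-right calibration targets."""
    nx = (pupil[0] - cal_min[0]) / (cal_max[0] - cal_min[0])
    ny = (pupil[1] - cal_min[1]) / (cal_max[1] - cal_min[1])
    # Clamp so a noisy frame can't point outside the display.
    nx = max(0.0, min(1.0, nx))
    ny = max(0.0, min(1.0, ny))
    return (nx * screen_w, ny * screen_h)

# A pupil halfway between the calibration extremes maps to screen center:
print(pupil_to_gaze((60, 40), cal_min=(40, 20), cal_max=(80, 60)))  # -> (960.0, 540.0)
```

Real trackers fit a fancier model (often per-eye, in 3D, fused with head pose), but the calibrate-then-map structure is the same.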
Another way to track your eyes is to use MEMS (microelectromechanical systems) devices.
These are microscopic devices that project and scan a low-power beam of light across the eye and output the eye’s coordinates.
This technology needs less power than camera-based trackers and requires less computational effort, while the tracking speed can be up to 10x faster than many eye-tracking cameras.
Intel recently bought a company called AdHawk that specializes in eye and gesture tracking using MEMS devices.
Another cool feature this tech enables is the ability to predict where the player will look next and start rendering that exact part of the scene first. That is part of a technique called Foveated Rendering; I’ll cover it in future episodes.
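The core decision behind foveated rendering can be sketched in a few lines: shade the screen tiles near the gaze point at full quality and drop quality with distance from it. The radii and rates below are arbitrary placeholders; real engines tune them to the eye’s acuity falloff:

```python
import math

# Sketch of the foveated-rendering decision: full shading rate near the gaze
# point, reduced rate in the periphery. The radii and rate values are
# arbitrary; real systems tune them to the eye's acuity falloff.

def shading_rate(tile_center, gaze, inner_r=150.0, outer_r=400.0):
    """Return a relative shading rate for a screen tile: 1.0 = full detail."""
    d = math.dist(tile_center, gaze)
    if d <= inner_r:
        return 1.0    # fovea: full resolution
    if d <= outer_r:
        return 0.5    # mid periphery: half resolution
    return 0.25       # far periphery: quarter resolution

gaze = (960, 540)  # where the eye tracker says you're looking
for tile in [(960, 540), (700, 540), (100, 100)]:
    print(tile, shading_rate(tile, gaze))
```

Because the periphery gets a fraction of the shading work, the GPU can spend its budget where your fovea actually is — which is exactly why fast, low-latency eye tracking matters so much.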
Now you know that facial expressions can be tracked, so you won’t scare away all your virtual pals while playing VRChat. Furthermore, it will make them feel that your Ugandan Knuckles avatar is alive.
And, more importantly, showing your true emotions will finally become seamless.