Generating New Reality

by Arun C Thomas, June 30th, 2017
In order to create a new reality, we first need to define what reality is. Take a moment now and think: what is reality to you?

The dictionary definition of reality is as follows.

Reality is the state of things as they actually exist, rather than as they may appear or might be imagined. Reality includes everything that is and has been, whether or not it is observable or comprehensible. A still broader definition includes that which has existed, exists, or will exist.

But most of the time, what we perceive as reality is a set of assumptions, built from the inputs of our senses and our knowledge of the real world, and from the way our brain combines the two into something that makes sense to us.

Think about magic. Is magic real? We know magic is not real; we know it is an illusion, yet it looks natural to our eyes, and something still feels strange. We see the magician walking on air, and it is compelling to believe. So what is different here? Even when we perceive things with our senses, if they do not fit our understanding of the real world, we treat them as magic or illusion, and sometimes they can even make us sick.

The famous theoretical physicist John Wheeler (the man who proposed the name "black hole") speculated that:

Reality is made from information by observers with concessions

In fact, quantum mechanics and relativity can explain reality in more complex ways. They even suggest that reality may not exist in the absence of an observer. The relation between reality and the golden ratio is still a point of discussion in quantum physics. There are even suggestions that what we perceive as reality may be controlled by codes (quasicrystals, for example) and that the future may affect the past in cyclic patterns.

Okay! Coming back to our understanding of reality, we have two main variables: the input from our senses, and our knowledge of the world. In the case of magic, the sensory input conflicts with our knowledge of the world (no one can walk on air, so it must be magic), and so we assume that some trick is involved.

What if we go the other way around? That is, we keep our knowledge of the world intact and instead manipulate our senses. Our senses will then receive the values our knowledge expects, and we will believe that what we perceive is real.

Yes, we can fool ourselves!

A modest version of this technique is used in User Experience (UX) design. Say we have a long scrolling list with a thousand items, and at any given time only 10 of them are visible on the screen. The program loads only the items that are currently visible (and maybe a few more). This greatly reduces the resources (memory, CPU, data) consumed in generating the items for us to view. As we scroll through the UI, the other elements are loaded incrementally. But at the end of the day, what we perceive is that there were always a thousand items. At that moment, our reality is controlled by a computer.
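The windowing idea behind such a lazily rendered list can be sketched in a few lines. This is an illustrative sketch, not the API of any particular UI framework; the function name and buffer size are assumptions.

```python
def visible_range(scroll_top, viewport_height, item_height, total_items, buffer=2):
    """Return (start, end) indices of the items worth rendering: those
    intersecting the viewport, plus a small buffer on each side."""
    first = max(0, scroll_top // item_height - buffer)
    last = min(total_items, (scroll_top + viewport_height) // item_height + 1 + buffer)
    return first, last

# A list of 1000 items, each 50 px tall, in a 500 px viewport scrolled
# down 2000 px: only items 38..53 need to exist, not all 1000.
start, end = visible_range(scroll_top=2000, viewport_height=500,
                           item_height=50, total_items=1000)
```

However fast we scroll, at most a dozen or so items are ever materialised, yet the scrollbar and layout behave as if all thousand were always there.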

So this example exploits only one of our senses (and only part of it): vision. There are four more senses available to be exploited. If a system can use some technique(s) to provide the right input to all our senses, and that input matches our understanding of the real world, then a new reality is generated. Of course, when we come out of that system we will realise that it was not real. A more advanced system could even feed signals directly into our brain to generate a new reality.

When we are trying to generate a new reality, it is important that we keep the user in a state of flow. Flow is an important concept in general, and in UX design in particular, where the ultimate goal is to give the user a fully immersive experience. So if we are exploiting the senses to generate a new reality, the input to each sense should be delivered in a way that sustains the flow. If we start breaking the flow, it leads to problems like virtual reality sickness, which is very similar to motion sickness. An understanding of the technology itself can sometimes help reduce the magic factor and improve the flow.

The term Virtual Reality (VR) was coined and popularised by Jaron Lanier, who is considered by many to be the father of virtual reality. He works at Microsoft Research as an interdisciplinary scientist, and he is also known for books like You Are Not a Gadget and Who Owns the Future.

Nowadays, virtual reality usually refers to a device that can control more than one of our senses: typically a headset that covers our vision and provides sound. Some can also provide tactile feedback through special devices. There are options that let us use our smartphones as the display. Google Cardboard can turn most Android smartphones into a VR headset. Daydream is the successor to Google Cardboard and is more mature, but it requires Daydream-enabled phones, which have specialised hardware that ensures consistently high-quality rendering (60 fps) to keep the user in flow.

There are other options, like the Vive and the Rift, that include all the required hardware within the headset itself. These come with specialised hardware for high-quality rendering (at 90 fps) and audio, along with customised input controls.

This is only one direction for generating reality. Another approach depends on a handheld device to provide a more realistic experience by taking input from the current environment. This technique of imposing elements on top of the current environment in our interface is called Augmented Reality (AR). It can be thought of as virtual reality with more input from the real world (in virtual reality everything is virtual; in augmented reality the real world is involved). ARKit from Apple and Tango from Google are the main players in the AR space. Google Glass was one of the initial projects that triggered the excitement around augmented reality, and it has lately picked up in business applications.

Progressively, we are seeing the gap between AR and VR shrink with the implementation of Mixed Reality (MR) projects like HoloLens. This may also lead to the creation of more specialised fields, like the Comradre project for multi-person augmented reality. Features such as eye tracking for precision, and additional cameras for real-world input in VR headsets, indicate that this field is growing really fast.

Devices that combine AR and VR, like the Asus ZenFone AR, are now available in the market. These kinds of combinations may also lead to Extended Reality, where real and virtual environments are combined.

So now, coming back to our mainstream virtual reality devices: what techniques do they use to generate reality for us? It must be more than just placing a panel close to our eyes and displaying content. How can we create a new reality today?

To create reality today, we depend mainly on vision, audition and touch. With vision and audition alone, it may feel like watching a movie. It is like that, but the experience is far more immersive in virtual reality: our context is pulled into the actual content, so we presume that it is more real.

Let's go one layer deeper into the vision part of virtual reality. We see with binocular vision, using two eyes that are about three inches apart. This gives us a three-dimensional view of the environment: we can judge the depth of objects and their actual positions in three-dimensional space. (That is one reason why we have dual-camera smartphones.) Each eye alone can see about 120 degrees; combined, they cover about 190 degrees. Our brain merges the signals from both eyes to give us an exact picture of the environment.

Now hold your index finger one or two feet away from your eyes, close one eye, and look at the finger with the other. Then switch eyes, and repeat. You will find that the apparent position of the finger changes depending on which eye you use. (The amount of change may also be affected by your dominant eye; yes, there are left-eyed and right-eyed people.)
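The shift you see in the finger experiment is exactly what stereo systems invert to recover depth. For an idealised pinhole stereo pair, depth = (baseline × focal length) / disparity; this relation is standard stereo geometry, not from the article, and the numbers below are illustrative rather than measured.

```python
def depth_from_disparity(baseline_m, focal_px, disparity_px):
    """Distance to a point, from its pixel shift between the two views
    of an idealised pinhole stereo pair."""
    return (baseline_m * focal_px) / disparity_px

# Eyes roughly 3 inches (0.076 m) apart, with a nominal focal length of
# 1000 px: a 76 px shift corresponds to a point about 1 m away, and a
# smaller shift to a point farther away.
near = depth_from_disparity(0.076, 1000, 76)   # ~1 m
far = depth_from_disparity(0.076, 1000, 19)    # ~4 m
```

The closer the finger, the bigger the shift, which is why nearby objects pop out so strongly and distant ones barely move between the eyes.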

Now that we know some of the science, we can use 3D stereoscopic rendering to create the perception of depth. That means showing a different image to each eye; the delta between these images is processed by our brain to produce the perception of depth. The closer the object, the bigger the delta, so changing the delta between the images creates the 3D effect of motion in depth. Sensors like the gyroscope and accelerometer can also be used to track head movement and reveal more content as we move our head, like showing more sky when you look up and more ground when you look down. This also enables a 360-degree viewing experience.
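In practice, stereoscopic rendering means rendering the scene twice, with the camera shifted half the interpupillary distance (IPD) to each side along the head's right vector. This is a minimal sketch of that idea; the function names and the 0.063 m IPD are illustrative assumptions, not taken from any particular VR SDK.

```python
import math

def eye_positions(head_pos, yaw_rad, ipd=0.063):
    """Left/right eye positions for a head at head_pos, rotated yaw_rad
    around the vertical axis. Each eye sits half the IPD along the
    head's right vector; the scene is then rendered once per eye."""
    rx, rz = math.cos(yaw_rad), -math.sin(yaw_rad)  # head's right vector
    half = ipd / 2.0
    x, y, z = head_pos
    left = (x - rx * half, y, z - rz * half)
    right = (x + rx * half, y, z + rz * half)
    return left, right

# Head at eye height, facing straight ahead: the two virtual cameras
# sit 31.5 mm to either side, producing the per-eye delta.
left, right = eye_positions((0.0, 1.7, 0.0), yaw_rad=0.0)
```

Updating yaw_rad each frame from the gyroscope is what makes more of the scene appear as you turn your head.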

One more interesting point about VR headsets is that the display sits very close to our eyes. This makes it very difficult for our eyes to process the light coming from the display; we may not even be able to see things correctly at this distance. So VR headsets come with specialised lenses that bend the light correctly and make it easier for our eyes to see. They may use specialised Fresnel lenses for accurate bending of the light.
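Those lenses introduce pincushion distortion of their own, so renderers commonly pre-warp the image with the inverse "barrel" distortion so the two cancel. A widely used radial polynomial model is sketched below; the coefficients are made-up illustrations, not values for any real headset.

```python
def barrel_distort(x, y, k1=0.22, k2=0.24):
    """Pre-warp a point in normalised lens coordinates (centre = 0,0)
    with a radial polynomial: r' = r * (1 + k1*r^2 + k2*r^4)."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

# The centre is unchanged; points farther from the centre are pushed
# outward more, giving the barrel shape that the lens then undoes.
cx, cy = barrel_distort(0.0, 0.0)
px, py = barrel_distort(0.5, 0.0)
```

Applying this per pixel is cheap on a GPU, which is why headsets can afford to do it every frame for both eyes.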

To produce a high-quality experience, the hardware and software in these devices must enable high-speed rendering and generate high-quality content. Devices should have a high-quality display with a consistent frame rate above 60 fps. They should also maintain a low-persistence mode to avoid motion blur: our eyes are exposed to the actual content for only a limited fraction of each frame (about a third of the time), which helps avoid the eye problems caused by persistence.
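The numbers above are easy to make concrete: the per-frame budget is just the reciprocal of the refresh rate, and low persistence lights the panel for only a fraction of that. The one-third lit fraction follows the figure quoted above; it varies by display in practice.

```python
def frame_budget_ms(fps):
    """Time available to render one frame at a given refresh rate."""
    return 1000.0 / fps

def persistence_ms(fps, lit_fraction=1.0 / 3.0):
    """How long a low-persistence panel actually shows each frame."""
    return frame_budget_ms(fps) * lit_fraction

# At 60 fps every frame must be ready within ~16.7 ms, and a
# low-persistence panel displays it for only ~5.6 ms; at 90 fps
# (Vive/Rift class) the budget tightens to ~11.1 ms.
budget_60 = frame_budget_ms(60)
lit_60 = persistence_ms(60)
```

Missing that budget even occasionally drops frames, breaks the flow, and is one of the quickest routes to VR sickness.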

Most software APIs for working with VR are built on top of the Open Graphics Library (OpenGL) to access the power of the Graphics Processing Unit (GPU). Using OpenGL, we write shaders that process and display the actual stream of bytes in the rendering pipeline. The complexities of OpenGL can be avoided to an extent by using Unity VR or libraries like RajawaliVR.

Nowadays there is more focus on the Vulkan graphics API, launched by Khronos (the developers of OpenGL) as the next-generation OpenGL initiative. Vulkan offers higher performance and more balanced CPU/GPU usage. Implementations of Vulkan are already available on Android, iOS and Mac. These kinds of initiatives can make the VR experience more immersive in the coming days.

Improvements in audio technologies, haptics and our understanding of the other human senses can themselves improve the quality of the reality we perceive. Together with advances in hardware and software, this may generate a perfectly immersive reality experience in the near future.

In the coming days we will see many new kinds of reality around us (Pokémon Go is just the beginning). These realities will control more of our senses, get precise feedback from us, and respond. We will see them mostly in business and entertainment applications. Maybe some day they will be able to redefine our knowledge of the real world and generate a completely new reality that persists. Or we will realise that reality is merely an illusion: a magic that we understand.

Thanks for your time.