Neural Nets + VR = Magic!

Written by samsniderheld | Published 2017/10/09


A gif demonstrating how a Convolutional Neural Network can be used for a VR level editor type interface.

Seriously … It’s like Harry Potter.

TL;DR: video here.

A while back, I wrote the first blog post in a series about the intersection of AI, creativity, and 3D content generation. This post is a continuation of that series.

My dream VR application is ultimately a seamless extension of my imagination. Sure it’s a lofty goal, but it is my intuition that machine learning techniques can help make this idea a reality.

Specifically, this blog post explores the use of convolutional neural networks to dramatically change interaction design in VR.

Designing VR in VR

Anyone who has designed a VR application will tell you that working in traditional 2D mediums only gets you so far. To make VR, you need to be in VR.

However, since VR is such a new medium, the industry supporting VR creation is also new. Companies like Unity and Epic are building VR level editors, but they end up feeling like VR desktops: menus upon menus, entrenched in the design principles of 2D mediums.

A demonstration of a VR editor.

For me, the process of creating a VR environment in real time needs to be fast and effortless. But how do we create such a system without making use of 2D menus?

What if we could draw what we wanted? What if, instead of having to navigate through a series of options, the system could understand what I need?

Enter Machine Learning

If you think about it, once you enter a VR environment, everything you do within it is translated into data. Everything you look at, every twitch of your arm, every action you take can potentially be recorded.

Dystopian cyberpunk ramifications aside, this is quite compelling from an ML standpoint, because what any ML model needs is lots of quality data. For the purposes of this blog post, we’ll be looking at how convolutional neural networks (CNNs) can be used for gesture recognition that replaces current 2D menu design principles.

CNNs have a remarkable ability to recognize data with spatial structure, i.e. images. What if we could use this ability to create interfaces that figure out what you want? For instance, instead of picking a prop from a long list of items, what if I just sketched it out?

I’m definitely not the first person to think of this. I’m essentially talking about Google Quick Draw or AutoDraw in VR. In fact, companies like Adobe are already exploring this kind of interaction in their products. Check out this recent demonstration of ProjectQuick3D. While I’m not sure whether these implementations use a CNN or something else, functionally the result is the same.

My First Model

As a first step, I took three classes from the Quick Draw dataset and used them to train a simple CNN. For the architecture, I took a basic MNIST example, since the MNIST and Quick Draw data are very similar.
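A rough sketch of that setup in Keras looks something like this. The Quick Draw classes are available as 28×28 numpy bitmaps; the local file paths, layer sizes, and training settings below are illustrative, not an exact recipe.

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
from keras.utils import to_categorical

classes = ["circle", "square", "triangle"]

# Load and label each class; the Quick Draw numpy bitmaps are 28x28 grayscale.
x, y = [], []
for label, name in enumerate(classes):
    bitmaps = np.load("data/%s.npy" % name)[:10000]   # hypothetical local path
    x.append(bitmaps.reshape(-1, 28, 28, 1).astype("float32") / 255.0)
    y.append(np.full(len(bitmaps), label))
x = np.concatenate(x)
y = to_categorical(np.concatenate(y), num_classes=len(classes))

# Shuffle so the validation split isn't dominated by a single class.
perm = np.random.permutation(len(x))
x, y = x[perm], y[perm]

# A basic MNIST-style CNN: two conv/pool blocks and a small dense head.
model = Sequential([
    Conv2D(32, (3, 3), activation="relu", input_shape=(28, 28, 1)),
    MaxPooling2D((2, 2)),
    Conv2D(64, (3, 3), activation="relu"),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(128, activation="relu"),
    Dropout(0.5),
    Dense(len(classes), activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x, y, batch_size=128, epochs=5, validation_split=0.1)
```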

Drawing a circle creates a sphere, while drawing a triangle creates a cube. For those who are detail-oriented: I’m sorry, but drawing a triangle on a trackpad is much easier than drawing a square :(

As a proof of concept, I started with a 2D interface on my MacBook Pro. The network is trained on circles, squares, and triangles; the idea is that when a user draws a shape, an associated shape is instantiated in 3D.
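The inference side of that loop is small: rasterize the drawn stroke down to the same 28×28 grayscale bitmap the network was trained on, take the most likely class, and spawn the matching primitive. In the sketch below, rasterize and spawn are hypothetical helpers, and the square-to-cube mapping is an assumption (the gif above only shows the circle and triangle cases).

```python
import numpy as np

CLASSES = ("circle", "square", "triangle")
SHAPE_FOR_CLASS = {
    "circle": "sphere",
    "triangle": "cube",
    "square": "cube",  # assumed; only circle and triangle mappings are shown above
}

def classify_sketch(model, sketch):
    """sketch: a 28x28 float array in [0, 1] rasterized from the user's stroke."""
    probs = model.predict(sketch.reshape(1, 28, 28, 1))[0]
    return CLASSES[int(np.argmax(probs))]

# drawn = rasterize(stroke_points)                       # stroke -> 28x28 bitmap
# spawn(SHAPE_FOR_CLASS[classify_sketch(model, drawn)])  # instantiate in the scene
```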

Knowing that the basic principle worked, it was time to take the whole thing into VR!

I decided to go with Leap Motion because I wanted the interactions to feel natural and effortless. While Leap Motion’s tracking wasn’t as precise as, say, a Vive controller’s, I found that once I accounted for its tracking quirks, the interactions felt very fluid.

A simple model with classes for “tree”, “bush”, and “flower”

So this was super cool, and it was quite surprising how well it worked. But here’s where I ran into the first problem with my assumptions.

Drawing was a very frictionless interaction, but only the first time you do it. For instance, imagine having to sketch a tree for every tree in a virtual forest. Some additional UI thinking was required.

What if, instead of drawing a tree, I drew a square? In other words, what if I mapped each object to a primitive shape?

“Circle” = “Bush”, “Square” = “Tree”, “Triangle” = “Flower”

This made the interaction easier, quicker, and very satisfying. However, it’s easy to imagine how quickly you could run out of primitive shapes. Perhaps you could use numbers, but then we’re moving away from the idea of effortlessly conjuring a 3D object. If you have to remember an arbitrary mapping, it’s too hard.
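Concretely, the indirection is nothing more than a small, remappable lookup from recognized gesture to prop (the names here are illustrative):

```python
# The network still only recognizes a few primitive gestures; this remappable
# lookup decides which prop each gesture spawns.
GESTURE_TO_PROP = {
    "circle": "bush",
    "square": "tree",
    "triangle": "flower",
}

def prop_for_gesture(gesture):
    # Every new prop either claims one of the few reliable primitives,
    # or forces the user to memorize a more arbitrary mapping.
    return GESTURE_TO_PROP.get(gesture)
```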

My next step was to separate the drawing and placing mechanics. Modes are switched by tapping different spheres aligned along my wrist, and an icon above my hand indicates which object is currently selected. This lets me quickly and intuitively choose a new object and then effortlessly place it.

Switching things up with a very hacky interface. Here you can see drawing and placement as two separate interaction modes.

It was time to really dive into CNNs and start building a custom model that could handle more classes. Since I’m still an ML newb, I used a relatively small dataset of 11 classes. After manually playing around with the architecture for a day or two, I stumbled upon Hyperas, a library that helped automate my architecture optimization; the final model I used came out of that search.
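For a sense of what that search looks like, here is a hedged sketch of a Hyperas run over a small Keras CNN of this kind. The class list, layer-size choices, dropout range, and trial count below are illustrative assumptions, not the exact tuned values.

```python
# The {{...}} placeholders are Hyperas template syntax, resolved when
# Hyperas rewrites and runs this script during the search.
import numpy as np
from hyperopt import Trials, STATUS_OK, tpe
from hyperas import optim
from hyperas.distributions import choice, uniform
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
from keras.utils import to_categorical


def data():
    # Quick Draw numpy bitmaps again, this time for more classes
    # (only some of the 11 classes are listed here; paths are hypothetical).
    classes = ["house", "chair", "sailboat", "school bus",
               "airplane", "tree", "bush", "flower"]
    x, y = [], []
    for label, name in enumerate(classes):
        bitmaps = np.load("data/%s.npy" % name)[:10000]
        x.append(bitmaps.reshape(-1, 28, 28, 1).astype("float32") / 255.0)
        y.append(np.full(len(bitmaps), label))
    x = np.concatenate(x)
    y = to_categorical(np.concatenate(y), num_classes=len(classes))
    perm = np.random.permutation(len(x))
    x, y = x[perm], y[perm]
    split = int(0.9 * len(x))
    return x[:split], y[:split], x[split:], y[split:]


def create_model(x_train, y_train, x_test, y_test):
    model = Sequential()
    model.add(Conv2D({{choice([32, 64])}}, (3, 3), activation="relu",
                     input_shape=(28, 28, 1)))
    model.add(MaxPooling2D((2, 2)))
    model.add(Conv2D({{choice([64, 128])}}, (3, 3), activation="relu"))
    model.add(MaxPooling2D((2, 2)))
    model.add(Flatten())
    model.add(Dense({{choice([128, 256, 512])}}, activation="relu"))
    model.add(Dropout({{uniform(0.2, 0.5)}}))
    model.add(Dense(y_train.shape[1], activation="softmax"))
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, batch_size=128, epochs=5, verbose=2,
              validation_data=(x_test, y_test))
    _, acc = model.evaluate(x_test, y_test, verbose=0)
    return {"loss": -acc, "status": STATUS_OK, "model": model}


best_run, best_model = optim.minimize(model=create_model, data=data,
                                      algo=tpe.suggest, max_evals=20,
                                      trials=Trials())
```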

This resulted in above 95% accuracy on the evaluation set. Now I had more classes, meaning I could magically pluck more objects out of thin air. However, since the classes I picked from Quick Draw were somewhat arbitrary, I was left with a rather random model.

This place could use a house!

and some chairs!

we definitely need a pirate ship (sailboat)!

These kids aren’t going to drive themselves to school!

How about some flying lessons?

For a more fluid view, check out the screen capture on YouTube:

Conclusion

The experience of drawing objects out of thin air is quite magical. But is it useful?

Honestly, in its current form with only a few classes, no.

Yes, it’s more intuitive, but it’s not faster than a standard, well-designed menu system. For this system to be superior, it needs a larger model, something that could recognize around 1,000 classes. That’s a bit out of my reach, unfortunately.

The ability to draw up any one of 1,000 objects would indeed be faster than finding that same object in a standard menu system. However, at that point you could just use a voice command like "house", and perhaps that would be more intuitive.

A drawing-based system becomes the best solution for objects whose differences come down to simple visual features. Perhaps it’s easier and quicker to draw a dandelion or a sunflower than it is for a speech recognition system to process the audio of "dandelion" or "sunflower", or to dig through a large, complex model with granular classes like 20 varieties of flowers. Perhaps you can’t remember the name of something? Or what if you’re in a busy office and your audio samples become too noisy? It also just feels good to draw objects versus saying words. Maybe a combination of the two works best?

In any case, while this prototype falls short of replacing a menu system, I think it succeeds in demonstrating the power of machine learning for creative VR and for VR UI.

A 3D environment allows us to evolve past desktop-based design, and machine learning is definitely a powerful tool for user interaction. In the next few blog posts, I’ll explore other, potentially more powerful implementations of creative ML in VR.


