
Neural Tech and Brain Computer Interfaces (BCI) in Video Games: An Overview

by Rizwan Virk, August 4th, 2020

Too Long; Didn't Read

Rizwan Virk attended a virtual conference on the use of neurotechnology and BCIs (Brain-Computer Interfaces) in gaming. BCIs play an important role in his roadmap to the Simulation Point, which he described in his book, The Simulation Hypothesis. The first thing he noticed was that all of the presenting companies were working on non-invasive neural interfaces, in contrast to Neuralink, the company funded by Elon Musk that is creating a chip implanted into the brain by drilling a small hole in the skull.


Recently, I attended a virtual conference on the use of neurotechnology and BCIs (Brain-Computer Interfaces, also known as BMIs, Brain-Machine Interfaces) in gaming, put on by NeurotechX.

I’ve been casually following the space for some time, since BCIs play an important role in my roadmap to the Simulation Point, which I described in my book, The Simulation Hypothesis. At this theoretical point, we would be able to beam simulations directly into the brain and get responses back from the brain, just like in the movie The Matrix.

This discussion made me recall the first time I really thought semi-seriously about BCIs. It was while watching the 80s movie Firefox, starring Clint Eastwood as a spy/pilot who is tasked with stealing a super-secret jet from the Soviet Union. What made the jet special was that it could respond to the mental commands of the pilot; the idea was that if you could cut off even a second (or perhaps milliseconds) of response time, the fighter jet would have an advantage over NATO’s fighters, which still used handheld controls. The one catch when Eastwood finally steals the jet is that it won’t respond to his commands — unless he thinks in Russian!

While we’re not at either Matrix-level or Firefox-level BCIs just yet, there were presentations from a number of interesting companies that helped shed light on the current state of the industry and its methods.

The first thing I noticed was that all of these companies were working on non-invasive neural interfaces. This is in contrast to Neuralink, the company funded by Elon Musk, which is creating a chip that can be implanted in the brain by drilling a small hole in the skull. This much-vaunted chip promises magical properties — including stimulating the brain’s pleasure centers and increasing human intelligence with AI. The first versions, though, will be targeted at those with brain disorders. Personally, I don’t know of any gamers who are ready to drill holes in their heads at this time, but that may change in the near future!

Figure 1: In the Matrix, humans could plug into simulations via a connecting port and wire

More importantly, unlike in the Matrix, which had a full two-way connection with the brain, these companies were mostly focused on getting input from the player through brain (or biofeedback or muscle) interfaces, though one of them was also interested in sending signals back to the user’s body for a more immersive experience.

Six companies doing interesting things in the field of neuro-gaming tech presented at the virtual conference, across two sessions:

Valve — The first presentation was by Mike Ambinder, an experimental psychologist associated with the University of Washington, who has been working at Valve on something mysterious related to BCI and gaming. He gave an overview of how BCIs could help in developing and improving gameplay experiences. While Ambinder was coy about what Valve is actually doing in this space, he did make some good arguments about the real-time data that could be gathered on the player’s experience and emotions while playing, which could lead to a whole new level of adaptive gameplay. He also gave an overview of “what we can measure today” — which was mostly EEGs in different permutations — his technical terminology was that we can measure “EEG correlates of Task/Engagement and Mental Workload in Vigilance, Learning and Memory Tasks”. What I got out of it was that the basic idea is to measure the EEG signals coming from the brain while the player is doing different things — competitive play, strategy games, etc. — measuring things such as “valence, arousal and dominance”.
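
To make that measurement idea a bit more concrete, here is a minimal sketch (my own illustration, not anything Valve showed) of one commonly used engagement proxy: the ratio of beta-band power to alpha-plus-theta power from a single raw EEG channel. The sampling rate, band edges, and synthetic data below are assumptions.

```python
# A minimal sketch (not Valve's code) of a common EEG engagement proxy:
# the beta / (alpha + theta) band-power ratio from one raw EEG channel.
import numpy as np
from scipy.signal import welch

def band_power(freqs, psd, lo, hi):
    """Approximate power in a frequency band by summing PSD bins."""
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].sum() * (freqs[1] - freqs[0])

def engagement_index(eeg, fs=256):
    """Rough task-engagement proxy from one EEG channel sampled at fs Hz."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
    theta = band_power(freqs, psd, 4, 8)
    alpha = band_power(freqs, psd, 8, 13)
    beta = band_power(freqs, psd, 13, 30)
    return beta / (alpha + theta)

# Example with synthetic noise standing in for a real headset stream.
if __name__ == "__main__":
    fake_eeg = np.random.randn(256 * 10)  # 10 seconds of "EEG"
    print(f"engagement index: {engagement_index(fake_eeg):.3f}")
```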


Figure 2: The new EEG device from Brainattach that is touted as Biotactical Gameware

Brainattach — The second presentation was from Maciej Rudzinski of Brainattach, who delivered a talk on “Introducing Survival Instinct to the game play”, and with it we moved on to hardware. Brainattach has created a personal EEG device specifically for gamers, which they call “Biotactical Gameware”. It’s basically a biofeedback device that can be used to improve your performance by “syncing” your emotions and state of mind with your gameplay. Biofeedback has been around for a while in many forms — consider that if you measure your heart rate while you breathe to see how it changes, you are basically doing biofeedback. Brainattach does this with EEG signals from its clinical-grade device. His presentation wasn’t so much about the device itself; like Valve’s, it was about the ways that EEG signals and other biofeedback can be used by gamers and game developers to maintain the “flow state” — that state in between “frustration” and “boredom” that makes playing a game fun. Their device has not yet been released, but you can sign up at www.brainattach.com.
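
To make the flow-state idea concrete, here is a small sketch (my own illustration, not Brainattach’s software) that buckets a normalized arousal reading into the boredom, flow, and frustration zones a game might react to. The zone boundaries are assumptions.

```python
# A toy sketch (not Brainattach's software) of bucketing a normalized arousal
# estimate into the boredom / flow / frustration zones described in the talk.
# The 0.3 and 0.7 boundaries are illustrative assumptions.

def flow_zone(arousal: float) -> str:
    """Classify a 0..1 arousal estimate into boredom, flow, or frustration."""
    if arousal < 0.3:
        return "boredom"      # under-challenged: the game could ramp up
    if arousal > 0.7:
        return "frustration"  # over-challenged: the game could ease off
    return "flow"             # the sweet spot worth maintaining

for reading in (0.1, 0.5, 0.9):
    print(f"arousal {reading:.1f} -> {flow_zone(reading)}")
```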

Figure 3: The fully immersive haptic Teslasuit

Teslasuit — The most fun presentation on the first day came from Dmitri Mikhalchuk of Teslasuit, a company whose product is technically not a BCI at all — unless you replace the word “brain” with “body” to get a BBI. The Teslasuit is a fully immersive haptic suit that, when combined with a VR headset, is meant to not only immerse you in a world but let you feel the gameplay by stimulating different parts of the body. The Teslasuit, which debuted and made waves at CES in 2018 and 2019, looks more like something straight out of Ready Player One. The way it works is that it emits electrical impulses to different parts of your body, and uses an advanced motion-capture system and biometrics to see what you are doing as input. As far as I could tell, it wasn’t strictly using EEGs or brain interfaces, but the field seems to merge with biometric feedback.

In addition to the full suit, they have introduced a high-fidelity haptic glove, which lets you feel objects you are seeing in virtual reality and, combined with the Luminas XR glasses, gives you a fully immersive experience. For example, in a racing-car demonstration the company did, you could not just see the road as you moved but also feel the g-forces on different parts of your body. In 2019, to show off the potential, they demonstrated the “contactless tackle” or “haptic tackle” demo. They had a rugby player wearing motion capture simulate tackling another player. While motion capture has generally been used to make animations more realistic, here they were really interested in which parts of the body make contact with the opponent during the tackle. Now, anyone who puts on the Teslasuit can experience what it’s like to be tackled by a professional rugby player! While innovative, I think I’ll pass, as those rugby players look pretty fit! But you can see the gaming applications that could be built. The suit, which was originally funded on Kickstarter, costs several thousand dollars — read more about it here.

During the second session, there were presentations from three more companies, which shed even more light on the state of the industry and some of the challenges to wider adoption of BCIs.

Figure 4: Getting input for gaming today (controller) vs the future, Brain and Biometrics, via Erin Reynolds of Flying Mollusk

Flying Mollusk — Flying Mollusk’s founder and CEO, Erin Reynolds, presented a history of her company and the state of BCI and biofeedback from a game developer’s perspective. She built biofeedback-based games as part of an academic project, and then started Flying Mollusk after graduating to build Nevermind, which was released on Steam a few years later. Originally she was measuring heart rate and other biometric feedback and using it to increase or decrease the level of “stress” the player experiences — just the kind of “dynamic adaptability” that Mike Ambinder from Valve was talking about. She then also included the Affective Emotions SDK, which can help a game developer detect what emotions the player is feeling using AI techniques. It turns out there is a whole industry of companies that use facial recognition techniques to look at a user’s face and detect whether they are happy or sad or angry — called Emotion APIs, or affective computing. It wasn’t clear how much actual brain interfacing was being used in her games, but she definitely mentioned EEGs and brain input as a future area for development. Erin also pointed out some of the obstacles to BCIs getting adopted in gaming — namely that there aren’t enough sensors out there, and there need to be better tools for game developers. Find out more about Flying Mollusk and their game Nevermind, here.
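
As a rough illustration of that kind of biometric loop (my own sketch, not Nevermind’s code), the idea is simply to map a heart-rate reading onto a stress estimate and nudge the game’s intensity accordingly. The resting baseline, thresholds, and step size below are all assumptions.

```python
# A toy sketch of heart-rate-driven dynamic difficulty, in the spirit of what
# Erin Reynolds described. The baseline, thresholds, and direction of the
# adjustment are illustrative assumptions, not Nevermind's actual design.

def stress_level(heart_rate_bpm: float, resting_bpm: float = 65.0) -> float:
    """Map heart rate to a 0..1 stress estimate relative to a resting baseline."""
    normalized = (heart_rate_bpm - resting_bpm) / resting_bpm
    return max(0.0, min(1.0, normalized))

def adjust_intensity(current_intensity: float, stress: float) -> float:
    """Ease off when the player is stressed, ramp up when they are calm."""
    if stress > 0.6:          # player is visibly stressed: back off
        return max(0.0, current_intensity - 0.1)
    if stress < 0.2:          # player is calm: push harder
        return min(1.0, current_intensity + 0.1)
    return current_intensity  # in the sweet spot: leave things alone

# Example: a few biometric samples driving the game's intensity over time.
intensity = 0.5
for bpm in [66, 72, 90, 110, 95, 70]:
    intensity = adjust_intensity(intensity, stress_level(bpm))
    print(f"heart rate {bpm:>3} bpm -> intensity {intensity:.2f}")
```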

Figure 5: The evolution of Hardware for BCI’s according to Neurable’s Adam Molnar

Neurable — Next came Adam Molnar from Neurable, a company which has had impressive demos of moving objects in VR through thought alone. Neurable, which grew out of research at the University of Michigan, is a Boston-based company that is bringing neural interfaces to VR worlds with modified VR headsets that can detect EEG signals. That’s not all they use — they also incorporate eye tracking and other information, though the focus is on EEG signal processing. Adam gave an overview of the history of BCI for gaming and the road left to travel. This included devices like the Star Wars Force Trainer from Mattel, which let you feel like you were actually using the Force.

Figure 6: Adam Molnar talked about the history of BCI in games, including the Star Wars Force Trainer!

Adam, like many of the other panelists, believes that mind interfaces are the next logical step in gaming interfaces, which went from typed commands to joystick/mouse, to gesture and touch control. Adam also gave an overview of their hardware development history, from wet sensors in the lab to dry sensors that could be attached to VR headsets, to a future that will include integrated sensors. One of the dangers Adam pointed out for BCI as an industry was setting expectations too high. When people saw the Neurable demo a few years ago, it was easy to assume that BCI had really arrived — when there is still a lot of work to be done. These lofty expectations can actually be a problem, because the reality rarely matches the hype that comes from such demos. In other words, it’s easy to think we are at Firefox-like mind reading, but we really aren’t there yet, and the whole field of BCI could be labelled as “missing its potential” and/or a “dud”. Neurable’s goal is an “everyday BCI” — see more here.

Figure 7: The Brink Bionics Glove senses movements and gives gamers an edge!

Brink Bionics — The final presentation, given by Erik Lloyd of Brink Bionics, took the Firefox-like use case and made it very specific to gaming. They assert that in games which require fast responses and spectacular hand-eye coordination, like first-person shooters and similar action games, the milliseconds it takes for a signal to go from the brain to the hands to the mouse/keyboard/joystick can make a difference. Their technology, which fits into a fingerless glove, can detect the small electrical impulses in the nerves that produce muscle twitches, which indicate what command the user has decided to issue in the game. Rather than using a mouse, though, their product can translate the muscular input directly into an existing command in existing games! This is perhaps the most practical of the products on display — although there is new hardware, you don’t have to reprogram existing games, since they convert the muscle signals to existing commands from the mouse or other controllers. Find out more here.
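
As a rough illustration of the idea (my own sketch, not Brink Bionics’ implementation), the core loop watches each EMG channel for an activation spike and, as soon as one crosses a threshold, fires the game input that channel is mapped to instead of waiting for the physical click. The channel map, threshold, and dispatch function are assumptions.

```python
# A rough sketch (not Brink Bionics' code) of mapping EMG activation spikes to
# existing game inputs. Channel names, threshold, and dispatch are assumptions.
import numpy as np

# Hypothetical mapping from glove EMG channels to game commands.
CHANNEL_TO_COMMAND = {0: "fire", 1: "jump", 2: "reload"}
THRESHOLD = 0.4  # activation level (arbitrary units) that counts as intent

def detect_commands(emg_window: np.ndarray) -> list:
    """Return commands whose channels show activation in this window.

    emg_window has shape (channels, samples); we rectify and take the mean
    amplitude per channel as a crude activation measure.
    """
    activation = np.abs(emg_window).mean(axis=1)
    return [cmd for ch, cmd in CHANNEL_TO_COMMAND.items()
            if activation[ch] > THRESHOLD]

def dispatch(command: str) -> None:
    """Stand-in for injecting an existing mouse/keyboard command into the game."""
    print(f"-> sending '{command}' to the game")

# Example: one short window where channel 0 twitches and the others are quiet.
window = np.zeros((3, 100))
window[0] = 0.6 * np.random.randn(100) + 0.5
for cmd in detect_commands(window):
    dispatch(cmd)
```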

Figure 8: Erik Lloyd talked about the state of BCI today vs. the future

Erik also talked about the evolution of neural technology in gaming and made the point that today we are in phase I — where a stimulus is presented by the game to the user, we measure the result (the reaction of the user), and then map it to some known reactions. He calls this “dependent” BCI, and says there will eventually be “independent” BCI, which won’t require presenting a scenario to the user (as in VR) but will allow machines to directly pick up on any of our thoughts so that we can use those thoughts to control the machines.
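
Here is a conceptual sketch of that “dependent” loop as I understood it (my own illustration, not anything shown at the conference): present a known stimulus, measure the reaction, and map it to the closest reaction template learned during a calibration phase. The templates and feature vectors are made up.

```python
# A conceptual sketch of a "dependent" BCI cycle: stimulate, measure the
# user's reaction, and map it to the nearest known reaction template.
# The templates and feature vectors here are illustrative assumptions.
import numpy as np

# Hypothetical reaction templates learned during calibration.
TEMPLATES = {
    "select": np.array([0.9, 0.1, 0.2]),
    "ignore": np.array([0.1, 0.8, 0.1]),
}

def classify_reaction(features: np.ndarray) -> str:
    """Map a measured reaction (feature vector) to the nearest known template."""
    return min(TEMPLATES, key=lambda k: np.linalg.norm(features - TEMPLATES[k]))

def dependent_bci_step(present_stimulus, measure_reaction) -> str:
    """One cycle of a dependent BCI: stimulate, measure, map to a known reaction."""
    present_stimulus()
    features = measure_reaction()
    return classify_reaction(features)

# Example with stand-in stimulus/measurement functions.
decision = dependent_bci_step(
    present_stimulus=lambda: print("highlighting menu item..."),
    measure_reaction=lambda: np.array([0.85, 0.15, 0.25]),
)
print(f"decoded intent: {decision}")
```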

Conclusion

Erik’s distinction between “dependent” and “independent” BCI was perhaps a good place to end the discussion of the state of the industry today. You’ll notice I watched the conference not in VR or AR or through a BCI, but simply on YouTube.

Today’s BCIs use a combination of biofeedback (heart rate, skin/stress response), affective computing (emotions based on facial recognition), EEG signal detection (and there is a lot of research on different ways to interpret these signals), eye tracking (for intention inside VR), motion capture (for the Teslasuit), and electrical impulses to simulate touch. While the state of the industry is no longer research-lab-only, it still has a long way to go before you can directly control any game from your mind. And it may not be long before the impact of these technologies goes well beyond gaming — who knows, there may be a Firefox-like fighter jet on the way!


A graduate of MIT and Stanford, Rizwan Virk is a futurist, entrepreneur, venture capitalist, video game pioneer, indie filmmaker and bestselling author. His books include The Simulation Hypothesis: An MIT Computer Scientist Shows Why AI, Quantum Physics and Eastern Mystics Agree We Are In A Video Game, and Startup Myths & Models: What You Won’t Learn in Business School. He is the founder of Play Labs @ MIT and a venture partner at several venture capital firms, including Griffin Gaming Partners. Visit his website at www.zenentrepreneur.com.