Your User Is Blindfolded and Swinging a Golf Club: Designing for VR

Written by laumski | Published 2026/03/16
Tech Story Tags: vr-applications | vr-design | meta-quest | spatial-ui | virtual-reality | product-design | vr-text-input | hackernoon-top-story

TL;DR: What product designers coming from web and mobile need to know about text input, physical safety, and spatial UI in VR, based on how apps like Beat Saber, FitXR, Supernatural, and NHL Sense Arena handle it.

When I moved from designing for web and mobile to designing for VR, my first reality check came from the most mundane thing imaginable: typing an email address. On a phone, that takes less than ten seconds. In VR, picking out each letter with a trembling laser pointer on a floating virtual keyboard, it can take up to a minute.

According to multiple studies, controller‑based typing on VR keyboards averages about 15 to 25 words per minute, with error rates of around 7% or higher. A smartphone gets about 36 words per minute at roughly a 2% error rate, and a physical keyboard does about 35 to 65 words per minute with error rates around 1%.

Simple text entry in VR is roughly half the speed of a phone and three to four times more error-prone.
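
The arithmetic behind that claim, using the midpoints of the figures quoted above:

```python
# Rough comparison using the study figures quoted above (range midpoints).
vr_wpm, vr_error = 20, 0.07        # controller-based VR keyboard
phone_wpm, phone_error = 36, 0.02  # smartphone typing

speed_ratio = vr_wpm / phone_wpm       # ~0.56: roughly half the speed
error_ratio = vr_error / phone_error   # ~3.5: three to four times the errors

print(f"VR is {speed_ratio:.0%} of phone speed, {error_ratio:.1f}x the error rate")
```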

VR opens up lots of possibilities, but also makes some of the most basic things painfully hard. Below is a list of things that we take for granted in other media that need extra tender loving care when you’re working in VR.

Every Form Field Is a Roadblock

Twenty words per minute with a 7% error rate is already quite sad, and that's when the user is holding two controllers. It gets considerably worse when you remember that many of the most popular VR apps are built around sports like golf, boxing, baseball, cricket, hockey, fitness, and table tennis, where sports equipment enters the picture. A controller strapped to a bat attachment, slotted into a golf club grip, or clipped onto a hockey stick mount stops being an “input device” and becomes purely a “tracking device.”

Any text input now needs to be done with the remaining controller, one-handed, which is even slower. Of course, any time the user needs to enter their name, session number, or find their team, you can force them to unstrap the controller, type, re-mount it, and recalibrate mid-session. Not exactly a delightful user experience.

Even though Meta has built dictation into the Quest OS, and open-source speech recognition now runs locally on Quest 3 hardware, voice hasn't become the primary input method for search or text entry in any major VR app yet.

Meta has also been experimenting with surface-detected keyboards that turn a desk into a typing surface, and researchers at Meta and ETH Zurich managed to reach 37 words per minute with it. But these are still research prototypes, not something you can build a product around today.

So what do VR apps do about this?

They treat text input as a problem to solve before the headset goes on. Account creation, profile setup, subscription management, team configuration all happen on a phone or computer ahead of time. Supernatural routes the entire account setup through a mobile companion app. FitXR gives web subscribers a short activation code to type in VR instead of entering an email and password. NHL Sense Arena uses a PIN instead of a full password for headset login. Once the VR session begins, every interaction becomes point-and-click: drill pickers, course selectors, difficulty sliders, team lists, etc.

The principle is simple: assume you will not be able to ask the user to type anything once the VR session is in progress, and design backwards from there.

In practical terms, this means that most serious fitness, training, and productivity VR apps end up shipping a companion mobile app or a web portal alongside the headset experience. That’s not a nice-to-have or a bonus feature, but a core part of the product architecture.

Safety Is Your Problem Now

Here’s the second thing that surprised me: the safety system built into every Meta Quest headset is fundamentally two-dimensional. Guardian (or "Boundary," as Meta now calls it) simply draws a perimeter on the ground and warns you when you approach it. It has no idea how high your ceiling is, whether there's a fan above your head, or how close your hand is to a shelf on the wall.

This matters a lot for physical activity apps, because people in VR don’t usually hurt themselves by walking into walls. They punch ceiling fans and knock TVs off shelves. When a user mounts a controller to a bat, a golf club, or a hockey stick, their reach extends by two to three feet (60 to 90 cm), but Guardian has no way of knowing that. You can draw a perfect boundary on the floor and still smash a light fixture or hurt someone on the backswing.

VR-related emergency room visits in the U.S. increased more than 4x between 2017 and 2021, and fractures accounted for about 30% of those visits. A study presented at CHI 2023 that analyzed videos from the r/VRtoER subreddit found that one of the most common accident types was people and pets walking into the play area. Even Meta's Space Sense feature, which shows silhouettes of people entering your space, doesn't help much. The headset's cameras are all mounted on the front and bottom of the device, so anyone approaching from behind is basically invisible. And even when Space Sense does detect someone in front of you, it can be ineffective during a fast-paced activity like boxing: by the time you see the outline, you've already swung.

Some apps are experimenting with mixed-reality passthrough modes where you can see your real room overlaid with the game. FitXR and Les Mills BodyCombat both offer this, but it’s marketed as an immersion feature, not a safety feature, and it’s optional.

There are straightforward design decisions that could help here. An app that knows the user is about to use a club, a stick, or any mounted equipment could require a larger play space during setup and refuse to start the session if the boundary is too small. It could prompt users to confirm their ceiling is clear, or remind them to clear the room of pets and children before a boxing session. The platform won't do any of this for you, so if your app involves physical movement, these checks need to be part of your product.
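
The first of those checks can be sketched in a few lines. This is a hypothetical pre-session check, not a platform API: given the boundary outline on the floor and the user's standing position, refuse to start when the user's extended reach could cross it. The arm-reach constant and the vertex-based distance are simplifying assumptions.

```python
import math

# Hypothetical pre-session safety check for apps with mounted equipment.
ARM_REACH_M = 0.75  # typical adult arm reach; an assumption, not a platform value

def min_clearance(user_xy, boundary_points):
    """Distance from the user to the nearest boundary vertex (simplified:
    a real check would measure distance to the polygon's edges)."""
    return min(math.dist(user_xy, p) for p in boundary_points)

def can_start_session(user_xy, boundary_points, equipment_length_m):
    """Refuse to start if arm plus mounted equipment can reach the boundary."""
    required = ARM_REACH_M + equipment_length_m
    return min_clearance(user_xy, boundary_points) >= required

# A ~2x2 m boundary is fine bare-handed but too small with a golf club:
square = [(-1, -1), (-1, 1), (1, 1), (1, -1)]
print(can_start_session((0, 0), square, 0.0))   # True: bare hands
print(can_start_session((0, 0), square, 0.9))   # False: ~90 cm club mounted
```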

UI Without a Screen Edge

Working with UI in VR means rethinking the interaction patterns, layout conventions, and information architecture you’re used to from web and mobile. There’s no above-the-fold, no sticky header, no full-screen mode, no screen edge to anchor a nav bar to. There’s no scrolling in the traditional sense, as you navigate the interface by looking, turning, and reaching.

Distance becomes a real design variable because elements exist at actual physical distances from the user's eyes, measured in dmm (distance-independent millimeters, where 1 dmm equals 1 mm at 1 meter from the user) rather than pixels. Placing something too close feels intrusive, while placing it too far makes it unreadable.

Take a flat rectangular panel, for example. It works fine on a monitor because the screen is small relative to your distance from it, so the difference between the center and the edges is negligible. In VR, UI panels tend to fill a much larger portion of your visual field, and at that scale, a flat surface means the edges are physically farther from your eyes than the center. The text gets harder to read, and the touch targets get harder to hit. Any wide UI in VR needs to curve to maintain equal distance from the user's eyes.
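
The dmm unit defined above reduces to simple arithmetic: because 1 dmm is 1 mm at 1 m, physical size scales linearly with distance. A small sketch of the conversions (the 24 dmm button size is an arbitrary example, not a guideline value):

```python
import math

# 1 dmm subtends the same visual angle as 1 mm viewed from 1 m, so the
# physical size needed to look "the same" scales linearly with distance.

def dmm_to_mm(size_dmm: float, distance_m: float) -> float:
    """Physical size in mm needed at `distance_m` to appear `size_dmm` dmm."""
    return size_dmm * distance_m

def dmm_to_degrees(size_dmm: float) -> float:
    """Visual angle subtended by an element of `size_dmm` dmm."""
    return math.degrees(math.atan(size_dmm / 1000.0))

# A 24 dmm button rendered at 2 m must be 48 mm wide to look the same
# size as a 24 mm button rendered at 1 m:
print(dmm_to_mm(24, 2.0))                 # 48.0
print(round(dmm_to_degrees(24), 2))       # ~1.37 degrees of visual angle
```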

The comfortable field of view for interaction in VR is much smaller than you’d expect.

The standard approach to laying out VR interfaces is equirectangular projection, which flattens the full 360-degree environment into a 2:1 rectangle. If you think of it as a canvas, the full space is 3600 by 1800 pixels. But the zone where a user can comfortably read and interact without turning their head is only the central 1200 by 600 pixels, about a third of the width and a third of the height. Everything outside that requires physical head movement, and interactions that require constant turning cause real neck fatigue over time.
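
The mapping between that canvas and angles is linear, which makes the comfortable zone easy to reason about in code. A sketch using the numbers above:

```python
# The equirectangular canvas maps pixels to angles linearly:
# 3600 px of width covers 360 degrees, so 10 px per degree on both axes.

CANVAS_W, CANVAS_H = 3600, 1800      # full 360 x 180 degree canvas
COMFORT_W, COMFORT_H = 1200, 600     # comfortable central zone from the text

PX_PER_DEG = CANVAS_W / 360          # 10 px per degree

def px_to_degrees(px: float) -> float:
    return px / PX_PER_DEG

def in_comfort_zone(x_px: float, y_px: float) -> bool:
    """Is a canvas point inside the central zone readable without head movement?"""
    cx, cy = CANVAS_W / 2, CANVAS_H / 2
    return abs(x_px - cx) <= COMFORT_W / 2 and abs(y_px - cy) <= COMFORT_H / 2

# The comfortable zone spans 120 x 60 degrees of the 360 x 180 available:
print(px_to_degrees(COMFORT_W), px_to_degrees(COMFORT_H))  # 120.0 60.0
```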

This constraint becomes a design opportunity once you stop thinking in screens and pages. You have 360 degrees of real estate around the user, and the ability to place information at different distances and anchor it either to the world or to the user’s body. There are several main patterns that apps use:

  • World-anchored UI is fixed in the virtual environment, like a scoreboard mounted on a wall. It stays in place when the user turns away.
  • Body-anchored UI is attached to the user's torso direction. It follows you around but doesn't track your head. Turning your head to look around doesn't move the interface, but it's always there when you look forward again. This gives the user a stable reference point without locking something to their face.
  • Head-locked UI follows your gaze everywhere you look. This is generally considered bad practice because it causes nausea, but can be used sparingly for brief notifications that appear for a second and disappear.
  • Diegetic UI is information built directly into the game world itself, like health displayed on a weapon or training stats shown on a virtual wristwatch. It's the hardest approach to get right, but when it works, users don't even feel that they're looking at an interface.
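
The body-anchored pattern above is the subtlest of the four, so here is a hypothetical sketch of one way to implement it. Headsets track the head, not the torso, so a common trick is to follow a heavily smoothed yaw: brief glances barely move the panel, but a sustained body turn eventually carries it along. The class, parameter values, and smoothing approach are all illustrative assumptions.

```python
import math

# Hypothetical body-anchored panel: follows a slowly smoothed yaw as a
# stand-in for torso direction, instead of the instantaneous head yaw.

class BodyAnchoredPanel:
    def __init__(self, distance_m: float = 1.5, smoothing: float = 0.05):
        self.distance = distance_m
        self.smoothing = smoothing   # low value = panel lags quick head turns
        self.anchor_yaw = 0.0        # estimated torso yaw, in radians

    def update(self, head_yaw: float) -> tuple[float, float]:
        """Called every frame; returns the panel's (x, z) floor position."""
        # Shortest signed angular difference, then exponential smoothing:
        delta = math.atan2(math.sin(head_yaw - self.anchor_yaw),
                           math.cos(head_yaw - self.anchor_yaw))
        self.anchor_yaw += self.smoothing * delta
        return (self.distance * math.sin(self.anchor_yaw),
                self.distance * math.cos(self.anchor_yaw))
```

With these values, a quick 90-degree glance moves the panel only a few degrees, while holding that direction for a few seconds of frames re-centers it in front of the user, which is exactly the "stable reference point" behavior described above.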

None of these ideas is rocket science, but they do require you to change how you think about interaction compared to web and mobile. You need to spend hours in VR to learn what feels right in terms of distance, size, and placement, so you can start using depth and spatial position the same way you use color or animation on a flat screen.

The Patterns Are Being Written in Real Time

VR as a design space is still being developed and is changing fast. Meta ships new SDK features and interaction models almost every quarter. Hand tracking accuracy has improved significantly in the last two years, and mixed-reality passthrough is opening up interaction patterns that didn’t exist before. What VR is today is not what it’s going to be in a year, and that is good news for designers moving into this space. It’s an opportunity to invent new interactions instead of just following tried and tired best practices.


Written by laumski | Product designer, founder of Reasonal | Senior Product Designer at Course Hero | Bain alumni | On Deck Founders (ODF15)
Published by HackerNoon on 2026/03/16