Credit: Disney Research’s Magic Bench
In the 1970s, the introduction of the microcomputer sparked a sea change in computing experiences. Up to that point, human-machine interaction had been shaped by the mainframe: one “big iron” machine kept in a fixed, central location and shared across many users. The microcomputer changed everything: computing became personal, powered by smaller, cheaper machines that could be kept close at hand and used one-on-one.
The arc of the last four decades — from yesterday’s microcomputers and PCs to today’s smartphones and wearables to tomorrow’s implantables and ingestibles — has largely been about realizing the potential of personal computing. For all the industry’s incredible progress, the personal nature of computing (close-at-hand, one-on-one) hasn’t fundamentally changed.
But this may not be true for much longer. The next wave of emerging technologies, platforms, and behaviors is finally pulling the world in a new direction: _spatial computing_. Machines, no longer attached to us, instead occupy space with and around us (physically or virtually) and can be used by many people at the same time (“multi-player mode”). A whole new possibility space of experiences that break the “personal computing” mold is opening up.
Just like personal computing, spatial computing emerges from the interplay of core technology advances (e.g. sensors, compute, AI/ML, 3D capture and rendering, displays), “post-mobile” behaviors and modalities (e.g. text, voice, gesture, AR/VR), and new contexts for computing (e.g. on wrists, eyes, ears; in kitchens and living rooms, office floors and conference rooms, cars). Several experiential principles for spatial computing are only now becoming technically feasible, economically viable, and behaviorally desirable:
Relatedly, these principles (and spatial computing itself) are deeply intertwined with the world of intelligent agents + things and with generations of people growing up talking with machines, both of which I’ve written about before.
If the microcomputer signaled the breakout of personal computing, what now signals the breakout of spatial computing? It’s still too early — and entirely possible this never plays out — but there are a few obvious themes to watch:
**Augmented & Virtual Reality**
_Virtual experiences in physical spaces_
Interactive 3D characters, objects, and overlays that live anywhere in your field of view and are rendered through a display, magic lens, projector, etc.

**Robots & IoThings**
_Physical experiences in physical spaces_
Hardware appliances and robots that live and roam in real-world spaces and physically interact with both people and the space around them.

**Chatbots & Assistant Apps**
_Virtual experiences in virtual spaces_
Bots that live in chat groups, email threads, conference calls, code repos, etc. as additional participants and interact with both people and content.
While each theme reflects a distinct approach to spatial computing, the boundaries between them are actually quite blurry; think voice assistant-enabled appliances, chat-based VR characters, AR-projecting robots, and so on. The breakout category for spatial computing may ultimately be a hybrid of multiple themes, or even a new theme that hasn’t yet surfaced.
None of this is to say personal computing is going away. It’s not. The themes here are not inherently restricted to spatial computing and span many personal computing use cases as well, like setting a timer with a voice assistant or watching a movie in VR. But it is to say there’s greenfield potential for experiences that couldn’t exist before — a companion robot that helps siblings resolve conflict, or a meeting assistant that interrupts calls to add missing context, or a choose-your-own-adventure story with AR characters on your tabletop — which in turn fuel new product categories.
If you’re thinking about or working on spatial computing, get in touch!