The Next Big Thing Isn’t on Your Phone. It’s AI-Powered XR and It’s Already Taking Over. Part II

Written by romanaxelrod | Published 2026/01/09
Tech Story Tags: ai | ar | xr | smart-glasses | deep-tech | smart-contact-lenses | materials-science | hackernoon-top-story

TL;DR: The next big thing in tech is AI-powered XR computing. But what form factor will it take? Which innovations will it require?

Every few decades, computing changes its shape. Once again, we’ve arrived at that moment. But what form factor will it take? Which innovations will it require? In the first part of this series, I broke down the signals identifying AI-powered XR computing as the next major leap in tech. In Part II, I’ll look at why Big Tech can’t deliver this on its own — and why making this vision a reality requires rethinking the foundations of science, engineering, and human behavior from the ground up.

XR Glasses: A Clever Workaround, Not the Destination

In the evolution of technology, true paradigm shifts often arrive disguised as impossible leaps, while stopgap solutions get mistaken for the endpoint. Today’s XR glasses are a perfect example – they captivate us, yet they’re ultimately just a transitional form factor.

Think of XR glasses as the PalmPilot of spatial computing: clever, ambitious, and impressive for its time. But ultimately, a stepping stone. The moment the real thing shows up (the iPhone-level leap), it’s over.

Major tech players seem content to iterate on legacy concepts, strapping ever-more electronics into headsets and glasses. In my previous article on Hackernoon, I called this The Pygmalion Curse: the industry's habit of endlessly reinventing Weinbaum’s near-century-old vision instead of moving forward.

These approaches are mostly incremental extensions of the familiar gadget model — devices we carry, wear, or put on our faces. But what if the category were reconsidered from the ground up? Instead of asking, “How do we adapt existing hardware for XR?”, a more useful question might be: “What form factor could genuinely disappear into everyday human experience?”

That line of thinking points away from visible devices and toward interfaces that operate at the threshold of perception. Exploring this requires advances in materials science, nanophotonics, and optoelectronics — the kind of foundational work that allows new form factors to emerge, rather than simply refining old ones.

Why the Market Demands More Than Smart Glasses

We’ve witnessed remarkable progress in AR glasses: Meta’s latest Ray-Ban collaboration sold out repeatedly; Apple’s Vision Pro redefined spatial computing with its micro-OLED displays and advanced eye-tracking; and companies like XREAL and Rokid have made lightweight AR glasses genuinely wearable and affordable (I’m a big fan of XREAL’s Air 2 Pro). Each new generation of XR glasses is lighter, more capable, and more socially acceptable than its predecessor. But they all still require users to wear something on their face.

Fun fact: a 2024 Monash University survey found that owners see smart glasses as boosting their self-image and social ties, while non-users fear privacy breaches and social disruption. In other words, smart glasses risk pushing people even further into their own bubbles, a worrying result.

Some believe the best device is no device. This is why Mojo Vision’s AR contact lenses make sense. In 2022, Mojo CEO Drew Perkins became the first human to publicly wear a functioning AR contact lens — a minuscule green monochrome microLED display, less than half a millimeter across, packing 14,000 pixels per inch. He was able to see text, graphics, and data projected directly into his visual field. No glasses, no headset, no visible hardware whatsoever.

“The future is a lot closer than most people think,” he wrote afterward. “I’ve seen it. I’ve worn it. It works.”

Mojo Vision showed that one of AR’s toughest, most time-consuming problems — display technology — could be shrunk down to contact-lens scale. Of course, reality hit: the market wasn’t ready, and the supporting tech hadn’t evolved enough. Call it the curse of “too soon.”

Ultimately, the firm moved away from smart contact lenses but remained in the vision tech space. They focused instead on microLEDs — including smart glass displays — as a platform. The demo, though, remains a watershed moment — proof that what seemed like a cyberpunk fever dream might just become a reality.

Why Now Is the Moment: AI, Materials, and the Rise of Superhuman R&D

So far, fewer than 20 companies (not including XPANCEO) have publicly disclosed their own development of smart contact lenses and subsequent patent applications. Only a dozen or so resisted the urge to pivot and are still active (I covered several in my previous article). When Mojo Vision demonstrated their prototype, they proved the feasibility of the smart contact lens. What they couldn’t have anticipated was how dramatically the landscape would shift in just a few years. The difference between “too soon” and “right on time” often comes down to the surrounding technological ecosystem. The bottlenecks at the time were materials, manufacturing precision, and the R&D velocity required for nanoscale-level iteration. Today, those constraints are evaporating.

Consider the traditional materials discovery pipeline: researchers hypothesize a structure, synthesize candidates, test properties, publish results — an iteration cycle measured in months or years.

Now, the cycle is changing. Instead of random exploration through possibility space, modern computational approaches are able to predict stable atomic configurations, simulate optical properties before synthesis, and identify candidates that would have taken decades to discover through conventional means.

In 2023, Google DeepMind’s deep learning tool GNoME predicted ~2.2 million potential crystal structures, including ~380,000 likely stable materials — roughly a tenfold increase over the stable crystals previously known. This expanded search space drastically accelerated materials discovery, allowing Berkeley Lab to synthesize dozens of compounds in days instead of months.
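To make the screen-before-synthesize loop concrete, here is a minimal Python sketch. Everything in it is hypothetical: `predicted_energy_above_hull` is a toy stand-in for a trained surrogate model (the real work is done by graph neural networks like those behind GNoME), and the features and threshold are invented for illustration.

```python
import random

def predicted_energy_above_hull(candidate):
    """Toy surrogate model: returns a fake predicted 'energy above the
    convex hull' in eV/atom. A real pipeline would call a trained ML
    model here; this formula is purely illustrative."""
    a, b = candidate["feature_a"], candidate["feature_b"]
    return abs(a - 0.5) + 0.2 * b

def screen(candidates, threshold=0.1):
    """Keep only candidates the surrogate predicts to be (meta)stable,
    i.e. within `threshold` eV/atom of the convex hull."""
    return [c for c in candidates if predicted_energy_above_hull(c) <= threshold]

random.seed(0)
# Generate a large pool of hypothetical candidate materials.
candidates = [
    {"id": i, "feature_a": random.random(), "feature_b": random.random()}
    for i in range(10_000)
]

stable = screen(candidates)
# Only the surviving fraction ever reaches the slow, expensive
# synthesis-and-characterization stage in the lab.
print(f"{len(stable)} of {len(candidates)} candidates pass the screen")
```

The point of the sketch is the funnel shape: a cheap learned filter discards the overwhelming majority of candidates in seconds, so the months-long experimental budget is spent only on the few structures predicted to be stable.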

Now the pace is even faster. Researchers are discovering entirely new classes of two-dimensional materials with unprecedented optical properties — materials that don’t exist in nature and would have been impossible to find without machine-learning-guided search.

Today, a deep tech R&D team is more than just a hundred highly talented (and expensive) researchers locked in a lab. It’s a collective intelligence, combining human intuition with machine precision and effectively changing the game. Accomplishments once requiring 30 researchers and three years’ time now take about a month. The smarter the AI, the speedier the process.

At XPANCEO, our own AI platforms accelerate the R&D process. They act as tireless collaborators that learn on the job, identify optimizations we’d have missed, and multiply our team’s domain knowledge.

For deep tech, this acceleration is profound. When the R&D cycle compresses by an order of magnitude, moonshots become feasible. Technologies whose research costs would once have bankrupted a company can now be developed sustainably.

This is why smart contact lenses, a concept that struggled in the recent past, are finally possible. We can now build, test, and iterate at superhuman speed — not just for XR contact lenses, but for a whole new generation of tiny, innovative devices powering ambient computing. And it’s not because anyone became smarter or luckier; it’s because the very foundation of deep-tech innovation has shifted beneath us.

Deep Tech’s Advantage: Fundamental Science vs. the Status Quo

Technological revolutions follow a consistent pattern that often goes unnoticed: the most profound changes don’t come from iteration on existing paradigms but from rebuilding the foundations.

Big Tech excels at optimization, taking known technologies, manufacturing them at massive scale, and squeezing out incremental improvements through sheer resources. This strategy works perfectly — until it hits the ceiling imposed by underlying physics. Silicon transistors can only be so small. Batteries can only be so energy dense. Glass optics can only be so thin.

Deep tech companies operating at the physics level can entirely rewrite these limits. It’s an advantage deep tech enjoys over even the most resource-rich tech giants: working at the atomic and nanoscale levels eliminates the constraints of existing supply chains, manufacturing processes, and material properties. The “substrate” itself can be engineered.

This principle is well-established in pharma. Biotech startups regularly produce breakthrough therapies that pharmaceutical giants cannot, not because their resources are more plentiful, but because they’re tackling problems and offering solutions — gene therapy, CRISPR, mRNA vaccines — at a fundamental level.

Physics-based deep tech is capable of similar revolutionary jumps. These strategies demand longer development timelines and significant upfront research investment, but when they pay off, the resulting competitive advantage is nearly unbeatable. New physics cannot be reverse-engineered. Materials that took years of AI-guided discovery to identify cannot be duplicated. Manufacturing processes that exist nowhere else in the industry cannot be replicated.

Transforming proofs-of-concept into reliable, biocompatible, manufacturable products that consumers are actually willing to wear demands a development philosophy fundamentally different from that of traditional consumer electronics.

Unlike a phone, a smart contact lens can’t ship a rough “version 1.0” and patch the bugs later. Medical device standards apply. Biocompatibility is non-negotiable. The device must be reliable and unobtrusive enough that users trust it with their eyesight, so clinical trials are certain to be extensive. As the technology matures and regulatory pathways clear, broader consumer applications become feasible.

Deep tech companies are built for the long game. Looking ahead, 2026 will introduce a wider variety of XR form factors to the market. Beyond smart contact lenses, the next wave of XR interfaces will take many shapes, surpassing glasses, headsets, and handhelds entirely. Tiny, integrated devices — from retinal displays and implantable micro-projectors to ultra-compact wearables embedded in clothing and even skin patches — are entering the realm of possibility thanks to advances in nanophotonics, materials science, and AI-assisted design.

Each form factor aims to disappear into the background of daily life while delivering context-aware computing, immersive visuals, and health monitoring. Just as the contact lens transforms the human eye into an interface, these new devices will redefine how and where we experience digital information, making computing less about gadgets and more about seamless interactions woven into the human body and its environment. Through these interfaces, we’ll be experiencing a customized cognitive layer powered by cutting-edge AI algorithms that are constantly adapting to our needs, filtering information, and guiding real-time decisions.

In other words, the science fiction writers who warned, inspired, and occasionally spooked us were right: we’re finally becoming cyborgs — eyes as screens, skin as interface, AI as humanity’s invisible sidekick. That imagined future is already powering up.

This raises a deeper question: if our perception, memory, and intelligence can be shared, amplified, or even partially outsourced to machines, then how exactly do we define “human”?

But that’s a topic for another time.


Written by romanaxelrod | Founder of XPANCEO, a deep tech company developing the next generation of computing via smart contact lens.