Introduction

For two decades, our interaction with the digital world has been confined to a 5-inch screen and a single fingertip. But what if we could break free from these limitations and bring the full spectrum of our innate human senses into everyday computing? The past few years have witnessed a dramatic acceleration in the development of human-AI interface technologies, pushing the boundaries of how we interact with artificial intelligence. From immersive display technologies to intuitive wearable devices and ambitious AI-powered assistants, the landscape is rich with both groundbreaking innovations and valuable lessons from early attempts.

Recent Announcements and Key Players

Meta Connect Announcements: Display Glass and Neuron Wristband

Meta's annual Connect event has consistently served as a platform for showcasing its long-term vision for augmented and virtual reality. The introduction of "Display Glass" hints at a future where digital information blends seamlessly with the physical world, likely offering contextual overlays and interactive experiences without the bulk of traditional headsets. Complementing it is the "Neuron Wristband," an advanced input method that could interpret neural signals or subtle hand gestures, offering a more natural and less intrusive way to control devices and interact with AI. These developments underscore Meta's commitment to building the foundational hardware for the metaverse, where human-AI interaction will be paramount.

Apple's AirPods Pro with Live Translation

Apple's iterative approach to innovation often involves integrating advanced AI capabilities into its widely adopted ecosystem. The Live Translation feature in AirPods Pro is a prime example, leveraging on-device and cloud AI to break down language barriers in real time.
This not only enhances communication but also demonstrates the potential for AI to act as a personal, omnipresent interpreter, seamlessly facilitating interactions in a globalized world. It highlights a focus on practical, everyday applications of AI that enhance the user experience without requiring entirely new form factors.

Google's Continued Effort in Smart Glasses

Google has a long history with smart glasses, from the ambitious but ultimately limited Google Glass to more recent enterprise-focused solutions. The continued effort suggests a persistent belief in the potential of head-mounted displays as a human-AI interface. Future iterations are likely to focus on improved form factors, enhanced AI capabilities for contextual information delivery, and more robust integration with Google's vast array of services, including search, maps, and AI assistants. The challenge remains finding the right balance between utility, social acceptance, and privacy.

OpenAI Acquires io (Jony Ive)

OpenAI's acquisition of io, the design collective led by former Apple chief design officer Jony Ive, is a significant strategic move. It signals a strong recognition within the leading AI research organization that the physical embodiment and user experience of AI systems are crucial for their widespread adoption and impact. Ive's legendary focus on minimalist design, intuitive interfaces, and emotional connection to technology suggests that OpenAI is focused not just on developing powerful AI models but also on crafting elegant, human-centric ways for people to interact with them, potentially leading to new categories of AI-powered devices and interfaces.

Learning from Early Efforts: Failed Experiments? The Humane AI Pin and Rabbit R1

The Humane AI Pin largely failed due to a combination of technical shortcomings, a high price point, and a flawed value proposition.
The device was criticized for being slow, unreliable, and prone to overheating. Its primary interface, a laser-projected screen on the user's palm, was finicky and difficult to use in bright light. Furthermore, the $699 price and a mandatory $24/month subscription fee were deemed exorbitant for a device that could not reliably perform basic tasks and lacked integration with common smartphone apps and services. Ultimately, the AI Pin failed to solve a significant problem for consumers and was widely seen as an inferior, redundant gadget compared to the smartphones already in their pockets.

The Rabbit R1's failure can be attributed to its inability to deliver on its core promises and a fundamental lack of purpose. The device was heavily marketed as a "Large Action Model"-powered tool that could control apps and services on the user's behalf, but at launch it supported only a handful of apps and failed at many basic tasks. Reviewers noted poor battery life, sluggish performance, and an awkward user interface. The company's claim that its device was more than a smartphone app was undermined when it was revealed that the entire interface ran as a single Android app, raising the question of why dedicated hardware was necessary at all. The R1's limited functionality, combined with its inability to compete with the capabilities of a modern smartphone, led many to conclude it was little more than a "half-baked" toy that did not justify its existence.

Looking Ahead in This Article

The evolution of human-AI interfaces is a dynamic field characterized by rapid experimentation and continuous refinement. How do you keep up with the latest developments and stay a step ahead of the curve? In the following chapters, we will start with a deep dive into the human-machine interface in the context of AI.
This will be followed by an opportunity analysis for future AI-focused HMI, as well as an overview of 40 companies categorized by the senses they address. Hopefully, this will give you a bird's-eye view of this fast-progressing industry and provide you with a roadmap to explore further according to your personal interests.

Human-Machine Interface — A Deep Dive

Comparative Table of Human Senses for HMI

| Sense | Approx. Info Transfer Speed (bandwidth) | Typical Latency (biological) | Electronic Acquisition Difficulty | Importance for HMI (why it matters) |
|---|---|---|---|---|
| Vision | ~10–100 Mbps equivalent (retina: ~1M ganglion cells × ~10 Hz avg firing; peak ~10⁸ bits/s raw, but compressed) | ~10–50 ms (visual processing lag; saccade update ≈ 30–70 ms) | Medium: cameras capture pixels easily, but depth, semantics, and robustness (lighting, occlusion) are hard | Highest: most dominant sense; AR/VR, robot teleoperation, situational awareness |
| Hearing (Audition) | ~10–100 kbps effective (20 Hz–20 kHz, dynamic range ~120 dB, compressed equivalent ~128 kbps MP3 quality) | ~1–5 ms cochlea–nerve; ~20–30 ms conscious perception | Easy: microphones replicate frequency & amplitude well, but spatial hearing (3D localization, reverberation) is harder | High: essential for speech, alerts, immersive UX; natural channel for AI assistants |
| Touch (Haptics, cutaneous) | ~1–10 Mbps (skin has ~17,000 mechanoreceptors in the hand; up to 1 kHz sensitivity) | ~5–20 ms (nerve conduction 30–70 m/s) | Hard: tactile sensors exist, but resolution, softness, temperature, and multi-modal feel are challenging | High: critical for manipulation, VR/AR realism, prosthetics |
| Proprioception (body position, muscle/joint sense) | ~100–1000 kbps (dozens of muscle spindles & Golgi organs firing continuously) | ~10–50 ms | Hard: requires motion capture, IMUs, EMG, complex fusion | Very High: essential for embodiment, robotics teleop, XR presence |
| Vestibular (balance, acceleration, rotation) | ~10–100 kbps (3 semicircular canals + 2 otolith organs) | ~5–10 ms (extremely fast reflex loop for balance) | Hard: gyros/accelerometers replicate linear/angular acceleration, but inducing realistic vestibular feedback is very hard | Medium–High: important for XR realism; mismatch causes motion sickness |
| Smell (Olfaction) | ~1–10 bps (≈400 receptor types, slow temporal coding) | ~400–600 ms (perceptual lag) | Very Hard: requires chemical sensing or odor synthesis; limited replicability | Low–Medium: niche (immersive VR, food, medical diagnostics) |
| Taste (Gustation) | ~1–10 bps (5 receptor types, slow integration) | ~500–1000 ms | Very Hard: chemical stimulation only; few practical electronic taste displays | Low: niche (culinary VR, medical) |
| Interoception (internal state: hunger, heartbeat, breath, gut signals) | Low bandwidth (<1 bps conscious; autonomic streams richer but subconscious) | Seconds–minutes | Very Hard: bio-signals accessible via ECG, PPG, hormone sensors, but incomplete | Medium: useful for health-aware HMIs, adaptive AI |
| Thermoception (temperature) | ~1–10 kbps | ~50–200 ms | Medium–Hard: thermal actuators exist, but slow response & safety constraints | Medium: enhances immersion, but not a primary channel |
| Nociception (pain) | Not a "data channel" but a strong aversive signal | ~100–300 ms | Not desirable: pain induction is ethically problematic | Low: only as safety feedback in prosthetics |

Key Observations

- Vision dominates bandwidth — orders of magnitude higher than other senses, but also the easiest to overload (the cognitive bottleneck for conscious reading/listening is roughly 40–60 bps).
- Latency matters differently: vestibular and proprioception are fast, reflexive senses — latency below ~20 ms is essential, otherwise motion sickness or disembodiment occurs. Vision tolerates 50–100 ms in UX.
- Electronic acquisition: Easy: vision (cameras), hearing (mics). Medium: touch (arrays of pressure sensors, haptic actuators). Hard: vestibular (feedback impossible without invasive or rotating rigs), proprioception (requires multimodal sensing), smell/taste (chemical).
- Importance for HMI: Core: Vision, Hearing, Touch, Proprioception, Vestibular. Niche/emerging: Smell, Taste, Interoception, Thermoception.
- Critical distinction: input vs. output — we can sense vision and hearing easily, but delivering feedback for touch/haptics and vestibular is much harder.

HMI Sensorium Radar

- Vision dominates in bandwidth and importance, with medium acquisition difficulty.
- Hearing offers excellent latency and easy acquisition.
- Touch and proprioception have high importance but are technically hard to digitize.
- Vestibular scores high on latency sensitivity but is very difficult to reproduce electronically.
- Smell and taste sit in the low-bandwidth, high-difficulty, low-importance corner (niche).
- Interoception and thermoception fall in between — valuable mainly for health or immersive feedback.
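The radar's qualitative ratings can be made queryable with a few lines of Python. This is a minimal sketch: the 1–5 scores and latency bounds are my own rough readings of the comparative table above, not the article's data.

```python
# Illustrative only: scores are assumptions distilled from the table above.
SENSES = {
    # name: (bandwidth_score, latency_upper_ms, difficulty_score, importance_score)
    "vision":         (5, 50,   3, 5),
    "hearing":        (3, 30,   1, 4),
    "touch":          (4, 20,   4, 4),
    "proprioception": (3, 50,   4, 5),
    "vestibular":     (2, 10,   5, 3),
    "smell":          (1, 600,  5, 2),
    "taste":          (1, 1000, 5, 1),
}

def reflex_critical(budget_ms=20):
    """Senses whose biological loop closes within the given latency budget;
    an interface addressing them must beat that latency or users notice."""
    return sorted(name for name, (_, lat, _, _) in SENSES.items()
                  if lat <= budget_ms)

print(reflex_critical())  # → ['touch', 'vestibular']
```

Widening the budget to 50 ms pulls in hearing, vision, and proprioception as well, which is why visual UX tolerates lag that vestibular feedback cannot.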
Implications for AI-HMI Design

- AI interfaces today (short-term): vision and hearing dominate (AR glasses, voice agents), but gesture, touch, and micro-movements are the new frontier.
- Near-term breakthroughs: haptics (Afference neural haptics, HaptX gloves), silent speech (AlterEgo), proprioception mapping (IMU + EMG), vestibular tricks (electro-stimulation).
- Far-term: smell, taste, and interoception are highly niche but can create hyper-immersive XR or health-aware AI companions.
- Bottleneck: humans can't consciously process anywhere near the raw sensory bandwidth — HMI design must compress to what's useful, intuitive, and low-latency.

HMI Opportunity Map

- Bottom-left (Vision, Hearing) → high bandwidth, low acquisition difficulty → already well covered, but incremental AI/UX improvements matter.
- Top-right (Vestibular, Proprioception, Touch) → high bandwidth/importance but hard to acquire electronically → the biggest innovation opportunities.
- Smell & Taste → low bandwidth, very hard, low importance → niche applications only.
- Interoception & Thermoception → moderate niche, especially for health-aware or immersive HMIs.

The "sweet spot" for future startups lies in making hard-to-digitize senses (touch, balance, body sense) usable for AI interfaces — the biggest gap between potential value and current tech maturity.

Biggest under-served opportunities for HMI innovation: I've ranked the senses by Innovation Gap (the difference between their theoretical potential and today's opportunity score).

1. Vision — already dominant but still leaves the largest gap (AI-driven compression, semantics, and augmentation).
2. Proprioception — huge potential but very hard to capture; unlocking it could transform XR/robotics.
3. Touch — high payoff if electronic haptics and tactile sensing improve.
4. Hearing — strong today but still under-optimized (spatial, multimodal, selective hearing AI).
5. Vestibular — critical for immersion, but remains technically difficult.
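The Innovation Gap ranking can be reproduced with a toy scoring model. A minimal sketch: the potential and maturity scores below are illustrative assumptions chosen to mirror the ranking in the text (the article does not publish its underlying numbers).

```python
# Toy model of the "Innovation Gap": gap = theoretical potential minus
# current tech maturity. All scores are illustrative assumptions.
POTENTIAL = {"vision": 10, "proprioception": 8, "touch": 8,
             "hearing": 7, "vestibular": 5}
MATURITY  = {"vision": 3,  "proprioception": 2, "touch": 3,
             "hearing": 3,  "vestibular": 2}

def innovation_gap():
    """Rank senses by how far today's tech lags behind their potential."""
    gaps = {s: POTENTIAL[s] - MATURITY[s] for s in POTENTIAL}
    return sorted(gaps.items(), key=lambda kv: kv[1], reverse=True)

for sense, gap in innovation_gap():
    print(f"{sense:15s} gap = {gap}")
```

With these assumed scores the ordering comes out vision, proprioception, touch, hearing, vestibular — the same ranking as above; swapping in different maturity estimates reshuffles the list accordingly.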
40 promising HMI startups to watch

Here's a curated, up-to-date landscape of AI-hardware startups building new human–machine interfaces (HMI) for the AI age. I grouped them by interface modality and flagged the form factor, what's new, and stage. I focused on 2024–2025 developments and included links/citations so you can dig deeper fast.

First, let's look at the AI HMI startup landscape in terms of bandwidth vs. acquisition difficulty:

1) Silent-speech, neural/nerve & micro-gesture input (non-invasive)

| Startup | Modality & Form Factor | What's new / why it matters | Stage / Notes |
|---|---|---|---|
| AlterEgo | Sub-vocal "silent speech" via cranial/neuromuscular signals; over-ear/behind-head wearable | Public debut of Silent Sense for silent dictation & AI querying at "thought-speed"; demos show silent two-way comms & device control. (Axios) | Newly out of stealth; product details pending. |
| Augmental (MouthPad^) | Tongue + head-gesture in-mouth touchpad (roof of mouth) | Hands-free cursor/clicks; active roadmap on head-tracking & silent-speech; raised seed in late 2023. (MIT News) | Shipping to early users; assistive & creator workflows. |
| Wearable Devices (Mudra Band / Mudra Link) | Neural/EMG-like wristbands (Apple Watch band + cross-platform Link) | CES 2025 Innovation Award; Link opens OS-agnostic neural input; dev kit & distribution deals. (CES) | Public company (WLDS); consumer + XR partners. |
| Doublepoint | Micro-gesture recognition from watches/wristbands | WowMouse turns Apple Watch into a spatial mouse; eye-tracking + pinch "look-then-tap" UX. (TechCrunch) | App live; SDK for OEMs & XR makers. |
| Wisear | Neural interface in earbuds (jaw/eye micro-movements; roadmap to neural) | "Neural clicks" for XR/earbuds; first Wisearphones planned; licensing to OEMs. (wisear.io) | Late-stage prototypes; announced timelines & pilots. |
| Afference (Phantom / Ring) | Neural haptics (output!) via fingertip rings/glove stimulating nerves | CES award-winner; creates artificial touch without bulky gloves; neural haptics reference ring. (Interesting Engineering) | Early funding; working with XR & research labs. |

2) Non-invasive neurotech / everyday BCI wearables

| Startup | Modality & Form Factor | What's new / why it matters | Stage / Notes |
|---|---|---|---|
| Neurable | EEG + AI in headphones (MW75 Neuro line) | Commercial "brain-tracking" ANC headphones measuring focus; productivity & health insights. (Master & Dynamic) | Shipping (US); scaling to EU/UK. |
| OpenBCI (Galea, cEEGrid, Ultracortex) | Research-grade biosensing headsets; around-ear EEG kits | Galea (EEG/EOG/EMG/EDA) integrates with XR; dev kits for labs & startups. (OpenBCI Shop) | Hardware available; strong dev ecosystem. |
| EMOTIV | EEG headsets & MN8 EEG earbuds | Newer consumer & research lines (Insight/EPOC X; MN8 earbuds) used in UX, wellness, research. (EMOTIV) | Mature startup; DTC + enterprise. |
| InteraXon (Muse) | EEG headbands; new Muse S "Athena" EEG+fNIRS | Adds fNIRS to consumer headband → better focus/sleep metrics & neurofeedback. (Muse: the brain sensing headband) | Shipping; wellness & performance verticals. |
| Cognixion (ONE) | Non-invasive BCI + AR speech headset | Uses BCI with flashing visual patterns + AI to speak/control smart home; ALS use-cases. (Cognixion) | Assistive comms pilots; clinical focus. |
| MindPortal | fNIRS-based "telepathic AI" headphones (R&D) | Targeting thought-to-AI interfaces with non-invasive optical signals. (mindportal.com) | Early stage; dev previews & interviews. |
| NexStem | EEG headsets + SDK | Low-cost BCI kits for devs & research; HMI demos. (nexstem.ai) | Developer community growing; raised a seed round in April 2025. |
3) Minimally-invasive & invasive BCI (clinical first, consumer later)

| Startup | Modality & Form Factor | What's new / why it matters | Stage / Notes |
|---|---|---|---|
| Synchron | Endovascular stentrode (via blood vessel → motor cortex) | Pairing with NVIDIA AI to improve decoding; ALS users controlling home devices. (WIRED) | Human trials; lower surgical burden vs. open-brain. |
| Precision Neuroscience | Thin-film cortical surface array (~1,024 electrodes) | "Layer 7" interface sits on the cortex without penetrating it; speech/motor decoding. (WIRED) | Received FDA clearance for the device; implanted in 37 patients as of April 2025. |
| Paradromics | High-bandwidth implant ("Connexus") | First human test (May 14, 2025); compact 420-electrode array aimed at speech/typing. (WIRED) | Moving toward long-term trials. |
| Neuralink | Penetrating micro-electrode implant + robot surgery | Large funding; parallel human-trials race; long-horizon consumer HMI. (Bioworld) | Clinical; significant visibility. |
| Blackrock Neurotech | Utah-array implants & ecosystems | Deep install base in research/clinical BCI. (Tracxn) | Clinical research leader; acquired by Tether in April 2024. |

4) AR glasses, AI wearables & spatial computers (new UX canvases)

| Startup | Device | What's new / why it matters | Stage / Notes |
|---|---|---|---|
| Brilliant Labs (Frame/Halo) | Open smart glasses + cloud AI agent | Open hardware/software for devs; lightweight daily-use AR + AI. (Forbes) | Shipping early units; active community. |
| Rokid | Light AR/AI glasses | New glasses at IFA 2025: on-glasses AI, dual micro-LED displays; live translation, nav, GPT. (Tom's Guide) | New model announced; consumer price point. |
| Sightful (Spacetop) | Screenless laptop + AR workspace | Spacetop G1 (and a Windows variant) → private, portable 100" desktop; AR productivity UX. (WIRED) | Preorders / rolling availability. |
| Limitless (Pendant) | Wearable voice lifelogger + AI memory | Records/organizes your day; context memory for the assistant. (Limitless) | Actively shipping for iOS; Android app planned for late 2025. |
| Rabbit (R1) | Pocket AI device (LAM-driven) | Major RabbitOS 2 UX overhaul; generative UI & new actions after a rocky launch. (9to5Google) | Over 130K devices shipped, but DAU hovers around 5,000 as of August 2025. |
| Humane (Ai Pin) | Projector pin wearable | Cautionary tale: service shutdown & HP acquisition illustrate the pitfalls of new AI UX. (WIRED) | Ceased Ai Pin sales in February 2025; sold most assets to HP; Ai Pin service shut down. |

5) Eye-, face- & driver-state sensing (affect-aware, context-aware UX)

| Startup | Focus | Why it matters |
|---|---|---|
| Smart Eye (Affectiva/iMotions) | Eye/face/driver monitoring & interior sensing | Automotive-grade attention & affect → safety & adaptive interfaces. (UploadVR) |
| uSens | Gesture & 3D HCI tracking (AR/VR, auto) | Vision-based hand/pose tracking at the edge for XR & mobile. (UploadVR) |

6) Quick Watchlist (emerging / adjacent)

- Wispr: software-first but explicitly pitching a voice-native interface for the AI era; raised to build the "keyboard replacement" with AI editing. (A good bellwether for voice-as-primary UX.) (Wispr Flow)
- MindPortal: fNIRS "thought-to-AI" headphones; early but notable. (mindportal.com)
- Ultraleap / Leap Motion legacy: hand-tracking pivot; signals consolidation in the category. (UploadVR)
- HaptX: industrial-grade microfluidic haptic gloves; training robots/AI with rich human demonstrations. (HaptX)

The Road Ahead

This article explores the shift from traditional screen-based HMIs to multi-sensory interaction. It highlights advancements by Meta, Apple, Google, and OpenAI, alongside lessons from past experiments like the Humane AI Pin and Rabbit R1. A detailed analysis of the human senses considers information transfer, latency, acquisition difficulty, and HMI importance, and the resulting "HMI Opportunity Map" identifies innovation gaps in under-digitized yet crucial senses such as touch and proprioception. Finally, it lists 40 promising HMI startups categorized by interface modality.

As you navigate these new frontiers, consider how your own work, research, or investments can contribute to creating more intuitive, ethical, and truly human-centric AI interactions. We encourage you to explore the highlighted startups, delve into the cited research, and actively engage in the ongoing dialogue about shaping a future where technology seamlessly augments human capabilities across all senses.