Introduction

For two decades, our interaction with the digital world has been confined to a 5-inch screen and a single finger. But what if we could break free of those constraints and unlock the full spectrum of our innate human senses in our everyday computing? The past few years have witnessed a dramatic acceleration in the development of human-AI interface technologies, pushing the boundaries of how we interact with artificial intelligence. From immersive display technologies to intuitive wearable devices and ambitious AI-powered assistants, the landscape is rich with both groundbreaking innovations and valuable lessons from early attempts.

Recent announcements and key players

Meta Connect Announcements: Display Glass and Neuron Wristband

Meta's annual Connect event has consistently served as a platform for presenting its long-term vision for augmented and virtual reality. The introduction of the "Display Glass" points to a future where digital information blends seamlessly with our physical world, likely offering contextual overlays and interactive experiences without the bulk of a traditional headset. Complementing it is the "Neuron Wristband," an advanced input method that could interpret neural signals or subtle hand gestures, offering a more natural and less intrusive way to control devices and interact with AI. These developments underscore Meta's commitment to building the foundational hardware for the metaverse.

Apple's AirPods Pro with Live Translation

Apple's iterative approach to innovation often involves integrating advanced AI capabilities into its widely adopted ecosystem. The "Live Translation" feature in AirPods Pro is a prime example, leveraging on-device and cloud AI to break down language barriers in real time. This not only enhances communication but also demonstrates the potential for AI to act as a personal, omnipresent interpreter, seamlessly facilitating interactions in a globalized world. It highlights a focus on practical, everyday applications of AI that enhance user experience without requiring entirely new form factors.

Google's Continued Effort in Smart Glasses

Google has a long history with smart glasses, from the ambitious but ultimately limited Google Glass to newer, business-focused solutions. Its continued effort signals a persistent belief in the potential of head-mounted displays as a human-AI interface. Future iterations will likely focus on improved form factors, stronger AI capabilities for delivering contextual information, and tighter integration with Google's broad range of services, including Search, Maps, and its AI assistants. The challenge remains finding the right balance between utility, social acceptance, and privacy.

OpenAI Acquires IO (Jony Ive)

OpenAI's acquisition of "IO," the design collective led by former Apple chief designer Jony Ive, is a significant strategic move. It signals a strong recognition within the leading AI research organization that the physical embodiment and user experience of AI systems are crucial to their widespread adoption and impact.
Ive's legendary focus on minimalist design, intuitive interfaces, and an emotional connection with technology suggests that OpenAI is concentrating not only on developing powerful AI models but also on creating elegant, human-centered ways of interacting with them, potentially leading to new categories of AI-driven devices and interfaces.

Learning from Early Efforts: Failed Experiments?

Humane AI Pin

The Humane AI Pin largely failed due to a combination of technical shortcomings, a high price point, and a flawed value proposition. The device was criticized for being slow, unreliable, and prone to overheating. Its primary interface, a laser-projected screen on the user's palm, was found to be finicky and difficult to use in bright light. Furthermore, the $699 price and a mandatory $24/month subscription fee were deemed exorbitant for a device that could not reliably perform basic tasks and lacked integration with common smartphone apps and services. Ultimately, the AI Pin failed to solve a significant problem for consumers and was widely seen as an inferior, redundant gadget compared to the smartphones already in their pockets.

Rabbit R1

The Rabbit R1's failure can be attributed to its inability to deliver on its core promises and its fundamental lack of purpose. The device was heavily marketed as a "Large Action Model"-powered tool that could control apps and services on the user's behalf, but at launch, it only supported a handful of apps and failed at many basic tasks. Reviewers noted poor battery life, sluggish performance, and an awkward user interface. The company's claim that its device was more than a smartphone app was undermined when it was revealed that the entire interface ran on a single Android app, raising the question of why dedicated hardware was needed at all.

Looking ahead in this article

The evolution of human-AI interfaces is a dynamic field characterized by rapid experimentation and continuous refinement. How do you keep up with the latest developments and stay one step ahead of the curve? In the following chapters we will start with a deep dive into the human-machine interface in the context of AI. This will be followed by an analysis of the opportunities for future AI-focused HMI, as well as an overview of 40 companies categorized by the senses they address.

Human-Machine Interface — a deep dive

Comparative Table of Human Senses for HMI

| Sense | Approx. Info Transfer Speed (bandwidth) | Typical Latency (biological) | Electronic Acquisition Difficulty | Importance for HMI (why it matters) |
|---|---|---|---|---|
| Vision | ~10–100 Mbps equivalent (retina: ~1M ganglion cells × ~10 Hz avg firing; peak ~10⁸ bits/s raw, but compressed) | ~10–50 ms (visual processing lag, saccade update ≈ 30–70 ms) | Medium: cameras capture pixels easily, but depth, semantics, and robustness (lighting, occlusion) are hard | Highest: most dominant sense; AR/VR, robot teleoperation, situational awareness. |
| Hearing (Audition) | ~10–100 kbps effective (20 Hz–20 kHz, dynamic range ~120 dB, compressed equivalent ~128 kbps MP3 quality) | ~1–5 ms for cochlea–nerve, ~20–30 ms conscious perception | Easy: microphones replicate frequency & amplitude well, but spatial hearing (3D localization, reverberation) is harder | High: essential for speech, alerts, immersive UX; natural channel for AI assistants. |
| Touch (Haptics, cutaneous) | ~1–10 Mbps (skin has ~17,000 mechanoreceptors in the hand; up to 1 kHz sensitivity) | ~5–20 ms (nerve conduction 30–70 m/s) | Hard: tactile sensors exist, but resolution, softness, temperature, multi-modal feel are challenging | High: critical for manipulation, VR/AR realism, prosthetics. |
| Proprioception (body position, muscle/joint sense) | ~100–1000 kbps (dozens of muscle spindles & Golgi organs firing continuously) | ~10–50 ms | Hard: requires motion capture, IMUs, EMG, complex fusion | Very High: essential for embodiment, robotics teleop, XR presence. |
| Vestibular (balance, acceleration, rotation) | ~10–100 kbps (3 semicircular canals + 2 otolith organs) | ~5–10 ms (extremely fast reflex loop for balance) | Hard: gyros/accelerometers replicate linear/angular acceleration, but inducing realistic vestibular feedback is very hard | Medium–High: important for XR realism; mismatch causes motion sickness. |
| Smell (Olfaction) | ~1–10 bps (≈ 400 receptor types, slow temporal coding) | ~400–600 ms (perceptual lag) | Very Hard: requires chemical sensing or odor synthesis, limited replicability | Low–Medium: niche (immersive VR, food, medical diagnostics). |
| Taste (Gustation) | ~1–10 bps (5 receptor types, slow integration) | ~500–1000 ms | Very Hard: chemical stimulation only, few practical electronic taste displays | Low: niche (culinary VR, medical). |
| Interoception (internal state: hunger, heartbeat, breath, gut signals) | Low bandwidth (<1 bps conscious; autonomic streams richer but subconscious) | Seconds–minutes | Very Hard: bio-signals accessible via ECG, PPG, hormone sensors, but incomplete | Medium: useful for health-aware HMIs, adaptive AI. |
| Thermoception (temperature) | ~1–10 kbps | ~50–200 ms | Medium–Hard: thermal actuators exist, but slow response & safety constraints | Medium: enhances immersion, but not primary channel. |
| Nociception (pain) | Not a "data channel" but a strong aversive signal | ~100–300 ms | Not desirable: pain induction ethically problematic | Low: only as safety feedback in prosthetics. |
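The Vision row's bandwidth figure can be sanity-checked with quick arithmetic. Below is a back-of-envelope sketch in Python; the bits-per-spike value is an assumption on my part (estimates of roughly 1-3 bits per spike are common in the literature), not a number from the table.

```python
# Rough sanity check of the Vision bandwidth row (all figures are order-of-magnitude).
ganglion_cells = 1_000_000  # ~1M retinal ganglion cells (from the table)
avg_firing_hz = 10          # ~10 Hz average firing rate (from the table)
bits_per_spike = 2          # assumed information per spike (~1-3 bits is a common estimate)

raw_bps = ganglion_cells * avg_firing_hz * bits_per_spike
print(f"~{raw_bps / 1e6:.0f} Mbit/s")  # ~20 Mbit/s, consistent with the ~10-100 Mbps range
```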
Key Observations

- Vision dominates bandwidth — orders of magnitude higher than the other senses, but it is also the easiest to overload (cognitive bottleneck at ~40–60 bps for conscious reading/listening).
- Latency matters differently: vestibular & proprioception are fast reflexive senses — latency below ~20 ms is essential, otherwise motion sickness / disembodiment occurs. Vision tolerates 50–100 ms in UX.
- Electronic acquisition — Easy: vision (cameras), hearing (mics). Medium: touch (pressure sensor arrays, haptic actuators). Hard: vestibular (no realistic feedback without invasive or rotating devices), proprioception (requires multimodal sensing), smell/taste (chemical).
- Importance for HMI — Core: Vision, Hearing, Touch, Proprioception, Vestibular. Niche / emerging: Smell, Taste, Interoception, Thermoception.
- Critical distinction — input vs output: we can sense vision & hearing easily, but delivering feedback in touch/haptics & vestibular is much harder.

HMI Sensory Radars

- Vision dominates in bandwidth & importance, with medium acquisition difficulty.
- Hearing offers excellent latency and easy acquisition.
- Touch + Proprioception have high importance but are technically hard to digitize.
- Vestibular scores high on latency sensitivity but is very difficult to reproduce electronically.
- Smell & Taste sit in the low-bandwidth, high-difficulty, low-importance (niche) corner.
- Interoception & Thermoception fall in between — valuable mainly for health-aware or immersive feedback.
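To make these radar summaries concrete, here is a minimal Python sketch that encodes each sense with rough ordinal scores (1 = low, 5 = high) and derives a position on the opportunity map discussed next. The scores are my own illustrative reading of the table above, not measured values.

```python
# Ordinal scores (1 = low, 5 = high) are an illustrative reading of the table, not measurements.
# Columns: bandwidth, latency sensitivity, electronic acquisition difficulty, HMI importance.
senses = {
    "Vision":         (5, 3, 3, 5),
    "Hearing":        (4, 3, 1, 4),
    "Touch":          (4, 4, 4, 4),
    "Proprioception": (3, 4, 4, 5),
    "Vestibular":     (2, 5, 5, 3),
    "Smell":          (1, 1, 5, 2),
    "Taste":          (1, 1, 5, 1),
    "Interoception":  (1, 1, 5, 3),
    "Thermoception":  (2, 2, 3, 3),
}

# Opportunity-map coordinates: x = acquisition difficulty, y = potential value
# (bandwidth weighted by importance). High y with high x marks an under-digitized, high-value sense.
for name, (bw, lat, diff, imp) in sorted(senses.items(), key=lambda kv: -(kv[1][0] * kv[1][3])):
    print(f"{name:14s} difficulty={diff} value={bw * imp}")
```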
HMI Opportunity Map

Impact on AI-HMI Design

- Today (short-term): vision + hearing dominate AI interfaces (AR glasses, voice agents), but gestures, touch, and micro-movements are the new frontier.
- Near-term breakthroughs: haptics (Afference neural haptics, HaptX gloves), silent-speech (AlterEgo), proprioception mapping (IMU + EMG), vestibular tricks (electro-stimulation).
- Far-term: smell/taste/interoception → highly niche but can create hyper-immersive XR or health-aware AI companions.
- Bottleneck: humans cannot consciously process anywhere near the raw sensory bandwidth — HMI design must compress toward what is useful, intuitive, and low-latency.

Reading the map by quadrant:

- Bottom-left (Vision, Hearing) → high bandwidth, low acquisition difficulty → already well covered, but incremental AI/UX improvements matter.
- Top-right (Vestibular, Proprioception, Touch) → high bandwidth/importance but hard to acquire electronically → the biggest opportunities for innovation.
- Smell & Taste → low bandwidth, very hard, low importance → niche applications only.
- Interoception & Thermoception → a moderate niche, especially for health-aware or immersive HMIs.

The "sweet spot" for future startups lies in making hard-to-digitize senses (touch, balance, body sense) usable for AI interfaces — the biggest gap between potential value and current tech maturity.

Biggest opportunities for HMI innovation

I've ranked the senses by Innovation Gap (the difference between their theoretical potential and today's opportunity score):

1. Vision — already dominant, but still leaves the biggest gap (AI-driven compression, semantics, and augmentation).
2. Proprioception — huge potential, but very hard to capture; unlocking it could transform XR/robotics.
3. Touch — high payoff if electronic haptics and tactile sensing improve.
4. Hearing — strong today but still under-optimized (spatial, multimodal, selective-hearing AI).
5. Vestibular — critical for immersion, but remains technically difficult.
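The ranking above can be reproduced mechanically. A minimal sketch, assuming illustrative 0-10 scores for theoretical potential and current maturity (the numbers are my own, chosen to mirror the ranking rather than sourced data):

```python
# Illustrative 0-10 scores; gap = theoretical potential - current maturity.
potential = {"Vision": 10.0, "Proprioception": 9.0, "Touch": 8.0, "Hearing": 8.0, "Vestibular": 7.0}
maturity  = {"Vision": 5.0,  "Proprioception": 4.5, "Touch": 4.0, "Hearing": 4.5, "Vestibular": 4.0}

gaps = {sense: potential[sense] - maturity[sense] for sense in potential}
for rank, (sense, gap) in enumerate(sorted(gaps.items(), key=lambda kv: -kv[1]), start=1):
    print(f"{rank}. {sense}: innovation gap = {gap:.1f}")
```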
40 promising HMI startups to watch

Here is a curated, updated landscape of startups building new human-machine interfaces (HMI) for the AI era. I grouped them by interface and flagged form factor, what's new, and stage. I focused on 2024–2025 developments and included links/citations so you can dig deeper fast.

First let's have a look at the AI HMI startup landscape in terms of bandwidth vs acquisition difficulty:

1) Silent-speech, neural/nerve & micro-gesture input (non-invasive)

| Startup | Modality & Form Factor | What's new / why it matters | Stage / Notes |
|---|---|---|---|
| AlterEgo | Sub-vocal "silent speech" via cranial/neuromuscular signals; over-ear/behind-head wearable | Public debut of Silent Sense for silent dictation & AI querying at "thought-speed"; demos show silent two-way comms & device control. (Axios) | Newly out of stealth; product details pending. |
| Augmental (MouthPad^) | Tongue + head-gesture touchpad in-mouth (roof of mouth) | Hands-free cursor/clicks; active roadmap on head-tracking & silent-speech; raised seed in late 2023. (MIT News) | Shipping to early users; assistive & creator workflows. |
| Wearable Devices (Mudra Band / Mudra Link) | Neural/EMG-like wristbands (Apple Watch band + cross-platform Link) | CES 2025 Innovation Award; Link opens OS-agnostic neural input; dev kit & distribution deals. (CES) | Public company (WLDS); consumer + XR partners. |
| Doublepoint | Micro-gesture recognition from watches/wristbands | WowMouse turns Apple Watch into a spatial mouse; eye-tracking + pinch "look-then-tap" UX. (TechCrunch) | App live; SDK for OEMs & XR makers. |
| Wisear | Neural interface in earbuds (jaw/eye micro-movements; roadmap to neural) | "Neural clicks" for XR/earbuds; first Wisearphones planned; licensing to OEMs. (wisear.io) | Late-stage prototypes; announced timelines & pilots. |
| Afference (Phantom / Ring) | Neural haptics (output!) via fingertip rings/glove stimulating nerves | CES award-winner; creates artificial touch without bulky gloves; neural haptics reference ring. (Interesting Engineering) | Early funding; working with XR & research labs. |

2) Non-invasive neurotech / everyday BCI wearables

| Startup | Modality & Form Factor | What's new / why it matters | Stage / Notes |
|---|---|---|---|
| Neurable | EEG + AI in headphones (MW75 Neuro line) | Commercial "brain-tracking" ANC headphones measuring focus; productivity & health insights. (Master & Dynamic) | Shipping (US); scaling to EU/UK. |
| OpenBCI (Galea, cEEGrid, Ultracortex) | Research-grade biosensing headsets; around-ear EEG kits | Galea (EEG/EOG/EMG/EDA) integrates with XR; dev kits for labs & startups. (OpenBCI Shop) | Hardware available; strong dev ecosystem. |
| EMOTIV | EEG headsets & MN8 EEG earbuds | Newer consumer & research lines (Insight/EPOC X; MN8 earbuds) used in UX, wellness, research. (EMOTIV) | Mature startup; DTC + enterprise. |
| InteraXon (Muse) | EEG headbands; new Muse S "Athena" EEG+fNIRS | Adds fNIRS to consumer headband → better focus/sleep metrics & neurofeedback. (Muse: the brain sensing headband) | Shipping; wellness & performance verticals. |
| Cognixion (ONE) | Non-invasive BCI + AR speech headset | Uses BCI with flashing visual patterns + AI to speak/control smart home; ALS use-cases. (Cognixion) | Assistive comms pilots; clinical focus. |
| MindPortal | fNIRS-based "telepathic AI" headphones (R&D) | Targeting thought-to-AI interfaces with non-invasive optical signals. (mindportal.com) | Early stage; dev previews & interviews. |
| NexStem | EEG headsets + SDK | Low-cost BCI kits for devs & research; HMI demos. (nexstem.ai) | Developer community growing. Raised a seed round in April 2025. |
3) Minimally-invasive & invasive BCI (clinical first, consumer later)

| Startup | Modality & Form Factor | What's new / why it matters | Stage / Notes |
|---|---|---|---|
| Synchron | Endovascular stentrode (via blood vessel → motor cortex) | Pairing with NVIDIA AI to improve decoding; ALS users controlling home devices. (WIRED) | Human trials; lower surgical burden vs open-brain. |
| Precision Neuroscience | Thin-film cortical surface array (~1024 electrodes) | "Layer 7" interface sits on cortex w/o penetrating; speech/motor decoding. (WIRED) | Received FDA clearance for the device; implanted in 37 patients as of April 2025. |
| Paradromics | High-bandwidth implant ("Connexus") | First human test (May 14, 2025); compact 420-electrode array aimed at speech/typing. (WIRED) | Moving toward long-term trials. |
| Neuralink | Penetrating micro-electrode implant + robot surgery | Large funding; parallel human trials race; long-horizon consumer HMI. (Bioworld) | Clinical; significant visibility. |
| Blackrock Neurotech | Utah-array implants & ecosystems | Deep install base in research/clinical BCI. (Tracxn) | Clinical research leader; acquired by Tether in April 2024. |
4) AR glasses, AI wearables & spatial computers (new UX canvases)

| Startup | Device | What's new / why it matters | Stage / Notes |
|---|---|---|---|
| Brilliant Labs (Frame/Halo) | Open smart glasses + cloud AI agent | Open hardware/software for devs; lightweight daily-use AR + AI. (Forbes) | Shipping early units; active community. |
| Rokid | Light AR/AI glasses | New glasses at IFA 2025: on-glasses AI, dual micro-LED displays; live translation, nav, GPT. (Tom's Guide) | New model announced; consumer price point. |
| Sightful (Spacetop) | Screenless laptop + AR workspace | Spacetop G1 (and Windows variant) → private, portable 100" desktop; AR productivity UX. (WIRED) | Preorders / rolling availability. |
| Limitless (Pendant) | Wearable voice lifelogger + AI memory | Records/organizes your day; context memory for assistant; Android app rolling out. (Limitless) | Actively shipping units for iOS; an Android app is planned for late 2025. |
| Rabbit (R1) | Pocket AI device (LAM-driven) | Major RabbitOS 2 UX overhaul; generative UI & new actions after rocky launch. (9to5Google) | Over 130K devices shipped, but DAU hovers around 5,000 as of August 2025. |
| Humane (Ai Pin) | Projector pin wearable | Cautionary tale — service shutdown & HP acquisition (illustrates pitfalls of new AI UX). (WIRED) | Ceased sales of the Ai Pin in February 2025 and sold most of its assets to HP; the Ai Pin service was also shut down. |

5) Eye-, face- & driver-state sensing (affect-aware, context-aware UX)

| Startup | Focus | Why it matters |
|---|---|---|
| Smart Eye (Affectiva/iMotions) | Eye/face/driver monitoring & interior sensing | Automotive-grade attention & affect → safety & adaptive interfaces. (UploadVR) |
| uSens | Gesture & 3D HCI tracking (AR/VR, auto) | Vision-based hand/pose tracking at the edge for XR & mobile. (UploadVR) |
6) Quick Watchlist (emerging / adjacent)

- Wispr (Wispr Flow) — software-first but explicitly pitching a voice-native interface for the AI era; raised to build the "keyboard replacement" with AI editing. (Good bellwether for voice-as-primary UX.) (Wispr Flow)
- MindPortal — fNIRS "thought-to-AI" headphones; early but notable. (mindportal.com)
- Ultraleap / Leap Motion legacy — hand-tracking pivot; consolidation signals in the category. (UploadVR)
- HaptX — industrial-grade microfluidic haptic gloves; training robots / AI with rich human demonstrations. (HaptX)

The Road Ahead

This article explores the shift from traditional screen-based HMIs to multi-sensory interactions. It highlights advancements by Meta, Apple, Google, and OpenAI, alongside lessons from past experiments like the Humane AI Pin and Rabbit R1. A detailed analysis of human senses considers information transfer speed, latency, acquisition difficulty, and HMI importance. The "HMI Opportunity Map" identifies innovation gaps in under-digitized yet crucial senses like touch and proprioception. Finally, it lists 40 promising HMI startups categorized by interface modality.

As you navigate these new frontiers, consider how your own work, research, or investments can contribute to creating more intuitive, ethical, and truly human-centric AI interactions. We encourage you to explore the highlighted startups, delve into the cited research, and actively engage in the ongoing dialogue about shaping a future where technology seamlessly augments human capabilities across all senses.