Introduction

For two decades, our interaction with the digital world has been confined to a 5-inch screen and a single fingertip. But what if we could break free of these constraints and unlock the full spectrum of our innate human senses in everyday computing? The past few years have seen a dramatic acceleration in human-AI interface technologies, pushing the boundaries of how we interact with artificial intelligence. From immersive display technologies to intuitive wearables and ambitious AI-driven assistants, the landscape is rich in both groundbreaking innovations and valuable lessons from early attempts.

Recent announcements and key players

Meta Connect Announcements: Display Glass and Neuron Wristband

Meta's annual Connect event has consistently served as a platform for showcasing its long-term vision for augmented and virtual reality. The introduction of "Display Glass" hints at a future where digital information blends seamlessly with our physical world, likely offering contextual overlays and interactive experiences without the bulk of traditional headsets. Complementing this is the "Neuron Wristband", an advanced input method that could interpret neural signals or subtle hand gestures, offering a more natural and less intrusive way to control devices and interact with AI. These developments underscore Meta's commitment to building the foundational hardware for the metaverse, where human-AI interaction will be central.

Apple's AirPods Pro with Live Translation

Apple's iterative approach to innovation often involves integrating advanced AI features into its widely used ecosystem. The "Live Translation" feature in AirPods Pro is a prime example, leveraging on-device and cloud AI to break down language barriers in real time.
This not only improves communication but also demonstrates AI's potential to act as a personal, ubiquitous interpreter, seamlessly facilitating interactions in a globalized world.

Google's Continued Effort in Smart Glasses

Google has a long history with smart glasses, from the ambitious but ultimately limited Google Glass to more recent, enterprise-focused solutions. Its continued effort signals a sustained belief in the potential of head-mounted displays as a human-AI interface. Future iterations will likely focus on improved form factors, stronger AI capabilities for contextual information delivery, and more robust integration with Google's broad range of services, including Search, Maps, and AI assistants.

OpenAI Acquires IO (Jony Ive)

OpenAI's acquisition of "IO", a design collective led by former Apple chief design officer Jony Ive, is a significant strategic move. It signals a strong recognition within the leading AI research organization that the physical embodiment and user experience of AI systems are crucial to their widespread adoption and impact. Ive's legendary focus on minimalist design, intuitive interfaces, and emotional connection to technology suggests that OpenAI is focused not only on developing powerful AI models but also on creating elegant, human-centric ways for people to interact with them, potentially leading to new categories of AI-driven devices and interfaces.

Learning from early efforts: failed experiments? Humane AI Pin and Rabbit R1

Humane AI Pin

The device was criticized for being slow, unreliable, and prone to overheating. Its primary interface, a laser-projected display on the user's palm, proved faint and difficult to use in bright light.
Moreover, the $699 price and a mandatory $24/month subscription fee were considered excessive for a device that could not perform basic tasks reliably and lacked integration with mainstream smartphone apps and services.

Rabbit R1

The R1's failure can be attributed to its inability to deliver on its core promises and its fundamental lack of purpose. The device was heavily marketed as a "Large Action Model"-driven tool that could operate apps and services on the user's behalf, but at launch it supported only a handful of apps and failed at many basic tasks. The company's claim that its device was more than a smartphone app was undermined when it was revealed that the entire interface ran in a single Android app, raising the question of why dedicated hardware was even necessary.

Looking ahead in this article

The evolution of human-AI interfaces is a dynamic field characterized by rapid experimentation and continuous refinement. In the following chapters we start with a deep dive into the human-machine interface in the context of AI. This is followed by an opportunity analysis for future AI-focused HMI, as well as an overview of 40 companies categorized by the senses they address.

Human-Machine Interface: a deep dive

Comparative overview of human senses for HMI. For each sense: approximate information transfer speed (bandwidth), typical biological latency, electronic acquisition difficulty, and importance for HMI (why it matters).

Vision. Bandwidth: ~10–100 Mbps equivalent (retina: ~1M ganglion cells × ~10 Hz avg firing; peak ~10⁸ bits/s raw, but compressed). Latency: ~10–50 ms (visual processing lag, saccade update ≈ 30–70 ms). Acquisition: Medium (cameras capture pixels easily, but depth, semantics, and robustness to lighting and occlusion are hard). Importance: Highest (most dominant sense; AR/VR, robot teleoperation, situational awareness).
Hearing (Audition). Bandwidth: ~10–100 kbps effective (20 Hz–20 kHz, dynamic range ~120 dB, compressed equivalent ~128 kbps MP3 quality). Latency: ~1–5 ms for cochlea–nerve, ~20–30 ms conscious perception. Acquisition: Easy (microphones replicate frequency & amplitude well, but spatial hearing, i.e. 3D localization and reverberation, is harder). Importance: High (essential for speech, alerts, immersive UX; natural channel for AI assistants).

Touch (Haptics, cutaneous). Bandwidth: ~1–10 Mbps (the skin has ~17,000 mechanoreceptors in the hand; up to 1 kHz sensitivity). Latency: ~5–20 ms (nerve conduction 30–70 m/s). Acquisition: Hard (tactile sensors exist, but resolution, softness, temperature, and multi-modal feel are challenging). Importance: High (critical for manipulation, VR/AR realism, prosthetics).

Proprioception (body position, muscle/joint sense). Bandwidth: ~100–1000 kbps (dozens of muscle spindles & Golgi organs firing continuously). Latency: ~10–50 ms. Acquisition: Hard (requires motion capture, IMUs, EMG, complex fusion). Importance: Very High (essential for embodiment, robotics teleop, XR presence).

Vestibular (balance, acceleration, rotation). Bandwidth: ~10–100 kbps (3 semicircular canals + 2 otolith organs). Latency: ~5–10 ms (extremely fast reflex loop for balance). Acquisition: Hard (gyros/accelerometers replicate linear/angular acceleration, but inducing realistic vestibular feedback is very hard). Importance: Medium–High (important for XR realism; mismatch causes motion sickness).

Smell (Olfaction). Bandwidth: ~1–10 bps (≈ 400 receptor types, slow temporal coding). Latency: ~400–600 ms (perceptual lag). Acquisition: Very Hard (requires chemical sensing or odor synthesis, limited replicability). Importance: Low–Medium (niche: immersive VR, food, medical diagnostics).

Taste (Gustation). Bandwidth: ~1–10 bps (5 receptor types, slow integration). Latency: ~500–1000 ms. Acquisition: Very Hard (chemical stimulation only, few practical electronic taste displays). Importance: Low (niche: culinary VR, medical).
Interoception (internal state: hunger, heartbeat, breath, gut signals). Bandwidth: low (<1 bps conscious; autonomic streams richer but subconscious). Latency: seconds–minutes. Acquisition: Very Hard (bio-signals accessible via ECG, PPG, hormone sensors, but incomplete). Importance: Medium (useful for health-aware HMIs, adaptive AI).

Thermoception (temperature). Bandwidth: ~1–10 kbps. Latency: ~50–200 ms. Acquisition: Medium–Hard (thermal actuators exist, but slow response & safety constraints). Importance: Medium (enhances immersion, but not a primary channel).

Nociception (pain). Not a "data channel" but a strong aversive signal. Latency: ~100–300 ms. Acquisition: not desirable (pain induction is ethically problematic). Importance: Low (only as safety feedback in prosthetics).

Key observations

Vision dominates in bandwidth: orders of magnitude higher than the other senses, but also the easiest to overload (the cognitive bottleneck for conscious reading/listening sits around ~40–60 bps).
Latency matters differently: vestibular and proprioception are fast reflexive senses; latency below ~20 ms is critical, otherwise motion sickness or disembodiment occurs.
Electronic acquisition:
Easy: vision (cameras) and hearing (microphones).
Medium: touch (arrays of pressure sensors, haptic actuators).
Hard: vestibular (feedback impossible without invasive or rotating rigs), proprioception (requires multimodal sensing), smell/taste (chemical).

Importance for HMI: the core channels are vision, hearing, touch, proprioception, and the vestibular sense. Niche/emerging: smell, taste, interoception, thermoception. A critical distinction is input vs. output: we can sense vision and hearing easily, but delivering feedback through touch/haptics and the vestibular channel is much harder.

HMI Sensorium Radar

Vision dominates in bandwidth and importance, with medium acquisition difficulty. Hearing offers excellent latency and easy acquisition. Touch and proprioception carry great importance but are technically difficult to digitize. Vestibular scores high on latency sensitivity but is very hard to reproduce electronically. Smell and taste sit in the low-bandwidth, high-difficulty, low-importance corner (niche). Interoception and thermoception fall in between, valuable mainly for health or immersive feedback.

Implications for AI-HMI design

AI interfaces today (near term): vision and hearing dominate (AR glasses, voice agents), but gestures, touch, and micro-movements are the new frontier. Near-term breakthroughs: haptics (Afference neural haptics, HaptX gloves), silent speech (AlterEgo), proprioception mapping (IMU + EMG), vestibular tricks (electro-stimulation). Long term: smell/taste/interoception remain highly niche but could enable hyper-immersive XR or health-aware AI companions. Bottleneck: humans cannot consciously process anywhere near the raw sensory bandwidth, so HMI design must compress to what is useful, intuitive, and low-latency.

HMI Opportunity Map

Bottom-left (Vision, Hearing): high bandwidth, low acquisition difficulty. Already well covered, but incremental AI/UX improvements still matter.
Top-right (Vestibular, Proprioception, Touch): high bandwidth and importance, but hard to acquire electronically. The biggest innovation opportunities.
Smell & Taste: low bandwidth, very hard, low importance. Niche applications only.
Interoception & Thermoception: moderately niche, valuable mainly for health-aware or immersive HMIs.

The "sweet spot" for future startups lies where the gap between potential value and current technological maturity is largest: making hard-to-digitize senses (touch, balance, body sense) usable for AI interfaces.

The biggest underserved opportunities for HMI innovation

I have ranked the senses by their innovation gap (the difference between their theoretical potential and today's opportunity score):

Vision: already dominant, yet it still leaves the largest gap (AI-driven compression, semantics, and augmentation).
Proprioception: enormous potential but very hard to capture; unlocking it could transform XR and robotics.
Touch: high payoff if electronic haptics and touch sensing improve.
Hearing: strong today but still under-optimized (spatial, multimodal, selective-hearing AI).
Vestibular: critical for immersion but remains technically difficult.

40 promising HMI startups to watch

Here is a curated, up-to-date landscape of AI-hardware startups building human-machine interfaces for the AI age, grouped by interface modality: form factor, what's new, and stage. I focused on 2024-2025 developments and included links/citations so you can dig deeper quickly. Let us first look at the AI HMI startup landscape in terms of bandwidth vs. acquisition difficulty:

1) Silent speech, neural/nerve & micro-gesture input (non-invasive)

AlterEgo. Sub-vocal "silent speech" via cranial/neuromuscular signals; over-ear/behind-head wearable. What's new: public debut of Silent Sense for silent dictation & AI querying at "thought-speed"; demos show silent two-way comms & device control. (Axios) Stage: newly out of stealth; product details pending.
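The innovation-gap ranking described above (gap = theoretical potential minus current maturity) can be sketched as a small scoring script. This is a minimal illustration only: the 0-10 potential/maturity scores below are my own assumed values chosen to mirror the ranking in the text, not data from the analysis.

```python
# Illustrative "innovation gap" scoring: gap = theoretical potential - current maturity.
# The 0-10 scores are assumptions for demonstration, not measurements.
senses = {
    "vision":         {"potential": 10, "maturity": 5},
    "proprioception": {"potential": 8,  "maturity": 4},
    "touch":          {"potential": 8,  "maturity": 5},
    "hearing":        {"potential": 9,  "maturity": 7},
    "vestibular":     {"potential": 5,  "maturity": 4},
}

def innovation_gap(scores):
    """Rank senses by (theoretical potential - current maturity), largest gap first."""
    return sorted(
        ((name, s["potential"] - s["maturity"]) for name, s in scores.items()),
        key=lambda pair: pair[1],
        reverse=True,
    )

for name, gap in innovation_gap(senses):
    print(f"{name}: gap {gap}")
```

With these assumed scores the script reproduces the ordering in the text: vision, proprioception, touch, hearing, vestibular.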
Augmental (MouthPad^). Tongue + head-gesture touchpad (roof of mouth), worn in-mouth. What's new: hands-free cursor/clicks; active roadmap on head-tracking & silent speech; raised a seed round in late 2023. (MIT News) Stage: shipping to early users; assistive & creator workflows.

Wearable Devices (Mudra Band / Mudra Link). Neural/EMG-like wristbands (Apple Watch band + cross-platform Link). What's new: CES 2025 Innovation Award; Link opens OS-agnostic neural input; dev kit & distribution deals. (CES) Stage: public company (WLDS); consumer + XR partners.

Doublepoint. Micro-gesture recognition from watches/wristbands; WowMouse turns an Apple Watch into a spatial mouse, with eye-tracking + pinch "look-then-tap" UX. (TechCrunch) Stage: app live; SDK for OEMs & XR makers.

Wisear. Neural interface in earbuds (jaw/eye micro-movements; roadmap to neural). What's new: "neural clicks" for XR/earbuds; first Wisearphones planned; licensing to OEMs. (wisear.io) Stage: late-stage prototypes; announced timelines & pilots.

Afference (Phantom / Ring). Neural haptics (output!) via fingertip rings/glove stimulating nerves. What's new: CES award winner; creates artificial touch without bulky gloves; neural haptics reference ring. (Interesting Engineering) Stage: early funding; working with XR & research labs.

2) Non-invasive neurotech / everyday BCI wearables

Neurable. EEG + AI in headphones (MW75 Neuro line). What's new: commercial "brain-tracking" ANC headphones measuring focus; productivity & health insights. (Master & Dynamic) Stage: shipping (US); scaling to EU/UK.

OpenBCI (Galea, cEEGrid, Ultracortex). Research-grade biosensing headsets; around-ear EEG kits. What's new: Galea (EEG/EOG/EMG/EDA) integrates with XR; dev kits for labs & startups. (OpenBCI Shop) Stage: hardware available; strong dev ecosystem.

EMOTIV. EEG headsets & MN8 EEG earbuds. What's new: newer consumer & research lines (Insight/EPOC X; MN8 earbuds) used in UX, wellness, and research. (EMOTIV) Stage: mature startup; DTC + enterprise.

InteraXon (Muse). EEG headbands; new Muse S "Athena" EEG+fNIRS. What's new: adds fNIRS to the consumer headband for better focus/sleep metrics & neurofeedback. (Muse: the brain sensing headband) Stage: shipping; wellness & performance verticals.

Cognixion (ONE). Non-invasive BCI + AR speech headset. What's new: uses BCI with flashing visual patterns + AI to speak and control the smart home; ALS use cases. (Cognixion) Stage: assistive comms pilots; clinical focus.

MindPortal. fNIRS-based "telepathic AI" headphones (R&D). What's new: targeting thought-to-AI interfaces with non-invasive optical signals. (mindportal.com) Stage: early stage; dev previews & interviews.

NexStem. EEG headsets + SDK. What's new: low-cost BCI kits for devs & research; HMI demos. (nexstem.ai) Stage: developer community growing; raised a seed round in April 2025.

3) Minimally invasive & invasive BCI (clinical first, consumer later)

Synchron. Endovascular stentrode (via blood vessel to motor cortex). What's new: pairing with NVIDIA AI to improve decoding; ALS users controlling home devices. (WIRED) Stage: human trials; lower surgical burden vs. open-brain approaches.

Precision Neuroscience. Thin-film cortical surface array (~1024 electrodes). What's new: the "Layer 7" interface sits on the cortex without penetrating it; speech/motor decoding. (WIRED) Stage: received FDA clearance for the device and has implanted it in 37 patients as of April 2025.

Paradromics. High-bandwidth implant ("Connexus"). What's new: first human test (May 14, 2025); compact 420-electrode array aimed at speech/typing. (WIRED) Stage: moving toward long-term trials.

Neuralink. Penetrating micro-electrode implant + robot surgery. What's new: large funding; parallel human trials race; long-horizon consumer HMI. (Bioworld) Stage: clinical; significant visibility.
Blackrock Neurotech. Utah-array implants & ecosystems. What's new: deep install base in research/clinical BCI. (Tracxn) Stage: clinical research leader; acquired by Tether in April 2024.

4) AR glasses, AI wearables & spatial computers (new UX canvases)

Brilliant Labs (Frame/Halo). Open smart glasses + cloud AI agent. What's new: open hardware/software for devs; lightweight daily-use AR + AI. (Forbes) Stage: shipping early units; active community.

Rokid. Light AR/AI glasses. What's new: new glasses at IFA 2025 with on-glasses AI and dual micro-LED displays; live translation, navigation, GPT. (Tom's Guide) Stage: new model announced; consumer price point.

Sightful (Spacetop). Screenless laptop + AR workspace. What's new: Spacetop G1 (and a Windows variant) delivers a private, portable 100" desktop; AR productivity UX. (WIRED) Stage: preorders / rolling availability.

Limitless (Pendant). Wearable voice lifelogger + AI memory. What's new: records and organizes your day; context memory for the assistant. (Limitless) Stage: actively shipping units for iOS, with an Android app planned for late 2025.

Rabbit (R1). Pocket AI device (LAM-driven). What's new: major RabbitOS 2 UX overhaul; generative UI & new actions after a rocky launch. (9to5Google) Stage: over 130K devices shipped, but DAU hovers around 5,000 as of August 2025.

Humane (Ai Pin). Projector pin wearable. A cautionary tale: the service shutdown & HP acquisition illustrate the pitfalls of new AI UX. (WIRED) Stage: Humane ceased sales of the Ai Pin in February 2025 and sold most of its assets to HP; the Ai Pin service was also shut down.

5) Eye-, face- and driver-state sensing (affect-aware, context-aware UX)

Smart Eye (Affectiva/iMotions). Eye/face/driver monitoring & interior sensing. Why it matters: automotive-grade attention & affect sensing enables safety & adaptive interfaces. (UploadVR)

uSens. Gesture & 3D HCI tracking (AR/VR, automotive). Why it matters: vision-based hand/pose tracking at the edge for XR & mobile. (UploadVR)

6) Quick watchlist (emerging / adjacent)

Wispr – software-first, but explicitly pitching a voice-native interface for the AI era; raised funding to build a "keyboard replacement" with AI editing. (A good bellwether for voice-as-primary UX.)
(Wispr Flow)
MindPortal – fNIRS "thought-to-AI" headphones; early but notable. (mindportal.com)
Ultraleap / Leap Motion legacy – hand-tracking pivot; signals consolidation in the category. (UploadVR)
HaptX – industrial microfluidic haptic gloves; training robots/AI with rich human demonstrations. (HaptX)

The road ahead

This article explores the shift from traditional screen-based HMIs to multi-sensory interactions. It highlights advances from Meta, Apple, Google, and OpenAI, along with lessons from earlier experiments such as the Humane AI Pin and Rabbit R1. A detailed analysis of the human senses considers information transfer, latency, acquisition difficulty, and importance for HMI. The "HMI Opportunity Map" identifies innovation gaps in under-digitized but crucial senses such as touch and proprioception. As you move through these new frontiers, consider how your own work, research, or investments could help create more intuitive, ethical, and truly human-centric AI interactions. We encourage you to explore the highlighted startups, dive into the research mentioned, and engage actively in the ongoing dialogue about shaping a future where technology seamlessly augments human capabilities across all the senses.