Something changed quietly over the past few years, and it wasn't technology. It was how people make a living. The hustle economy didn't arrive as a movement. It simply became normal. Freelancing turned routine. Side income became a form of insurance. Reselling, content work, and small solo businesses slipped into everyday life. Even people with stable jobs began behaving as if stability were provisional. That shift explains more about today's consumer tech landscape than any keynote ever could. It helps explain why certain brands surged while others stalled — and why the most important businesses of the next cycle may never become household names.

Over the past few years, two of the strongest-performing consumer tech categories have been hiding in plain sight. Cameras. Desktop manufacturing (maker tools). Both are old enough to be dismissed as saturated, yet both are quietly producing a lineup of breakout companies, from Insta360 to Bambu Lab to Snapmaker — the latter having recently secured funding from Xiaomi founder Lei Jun's Shunwei Capital following a record-breaking Kickstarter campaign that raised over $20 million.

But what I'm watching at CES isn't what spec bumps these companies roll out. It's whether the show reflects what's actually powering their momentum now. The differentiator isn't raw hardware anymore — it's software, and I'm curious to see which brands are reorganizing around it.

**Cameras: Optimized For An Edit-First World**

Traditional cameras were built for a time when a great raw shot was enough. Get the framing right, nail exposure, capture the moment, move on. If the footage looked good out of the camera, it did its job. That logic belongs to a pre-edit world.

With the rise of influencers and small e-commerce operators, footage is expected to work harder than that. It gets sliced, stretched, reframed, and reused across platforms that didn't exist when the record button was invented. The real catastrophe isn't an imperfect shot — it's discovering later that the moment you needed never made it into the file at all.

That means the most important change in cameras right now has very little to do with the size of your sensor. For the companies pushing cameras the hardest, the goal has become coverage. Capture wide. Track relentlessly. Don't lose the subject. Preserve options. Because in an edit-first culture, everything can be fixed later except absence.
That changes what a camera is optimized for. **Instead of assuming a single decisive moment, newer systems are tuned to reduce regret.** They prioritize continuous coverage over perfect framing, persistence over precision, and systems that operate without constant human babysitting. That's why newcomers like OBSBOT are gaining traction. They haven't reinvented optics, but they have redesigned around this new set of operating assumptions.

**What I'm Watching For At CES**

CES will be full of cameras promising sharper images. Necessary, expected. But I'm keeping tabs on upstream innovation — in where processing runs, how footage arrives already organized, and how much work gets eliminated before editing even begins. What's changing technically isn't a single breakthrough, but rather how responsibility gets distributed across the system.

**The Architecture Behind Edit-First Cameras**

More computation is moving into capture itself. Subject tracking, stabilization, horizon management, exposure continuity — these aren't features so much as safeguards. They exist because certain mistakes are unrecoverable once the sensor stops recording. If framing collapses mid-shot or tracking drifts at the wrong moment, no amount of post-processing restores what was never there. On-device computation replaces constant human correction with predictable behavior, tuned for continuity rather than perfection.

Once footage lands, a different priority takes over: speed of decision-making. Phones and laptops have become the first layer of triage, not because they're ideal editing environments, but because they're immediate. Reframes, format changes, quick exports — this is where footage gets validated as usable or not. Cameras that move cleanly into these workflows scale faster because they compress time-to-output. The goal isn't creative depth; it's reducing the delay between capture and something that can be published, shared, or sold.

The cloud solves a slower, heavier problem: memory at scale. Long recordings, stitched footage, 360 video, object indexing, delayed reuse — this is work that benefits from persistence more than immediacy. It's where footage stops behaving like a folder of clips and starts behaving like a dataset. This is the bet companies like Insta360 are making: that the real value isn't just in capturing more, but in being able to return to footage later and extract something new without starting from scratch.

AI threads through all of this, but its role is narrower — and more practical — than the hype suggests. The useful applications aren't about editing for you. They're about structure. Identifying subjects, tracking motion, flagging moments, grouping related clips. In other words, reducing the cost of finding value later. AI doesn't make creative decisions. It lowers the friction required to reach them.
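To make that concrete, here's a minimal sketch of what "structure, not editing" could look like on the software side. Everything in it, from the `ClipIndex` fields to the `find_usable` helper, is hypothetical rather than any vendor's actual API; the point is that the machine only attaches metadata, and every creative decision stays human.

```python
# Hypothetical sketch: AI as "structure, not editing."
# The model only produces metadata; selection and taste stay with the human.
from dataclasses import dataclass, field


@dataclass
class ClipIndex:
    """Machine-generated structure attached to a piece of raw footage."""
    path: str
    subjects: list[str] = field(default_factory=list)    # who/what appears
    motion_spans: list[tuple[float, float]] = field(default_factory=list)
    flagged_moments: list[float] = field(default_factory=list)  # timestamps, seconds
    group_id: str | None = None  # clips from the same scene or session


def find_usable(clips: list[ClipIndex], subject: str) -> list[ClipIndex]:
    """Cheap retrieval: every clip containing a subject, most eventful first.

    Nothing here edits footage. It just collapses the cost of finding
    the moment you need across hours of coverage.
    """
    hits = [c for c in clips if subject in c.subjects]
    return sorted(hits, key=lambda c: len(c.flagged_moments), reverse=True)
```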
Put together, modern cameras are starting to behave less like recording devices and more like intake systems, designed to prevent loss, compress turnaround, and preserve optionality across time, formats, and platforms. That's also where the economics show up. The metric that matters isn't resolution or dynamic range, but yield. How much of what you shoot survives into something usable. How often you have to reshoot. How much time gets burned fixing problems that didn't need to happen.

This shift is easy to miss on the CES floor, but the companies building in this direction are the ones most likely to turn engineering into repeatable, boring, very real revenue.

**Maker Tools: The Most Unlikely Place AI Matters**

Maker tools will keep booming in 2026 for a boring reason that doesn't photograph well: supply and demand for personalization finally match. Demand is obvious. Personalization isn't a nice-to-have anymore. What's new is that supply has caught up enough to make customization economically viable at scale without needing to badger factories.

The pandemic helped, but not in the sentimental "everyone became a maker" way. It changed time, attention, and habits. People got bored, then curious, then competent. They discovered — often through social feeds — how much you can do with a 3D printer, a laser, or a small CNC, and how quickly you can iterate when you're not waiting on a manufacturer in another time zone. Companies like XTool and Bambu Lab further helped normalize desktop manufacturing by doing two things at once: shipping machines that looked and felt next-gen while relentlessly flooding social feeds with influencers using them to make things you wouldn't expect these tools to handle. The hardware earned credibility, the content reframed expectations, and fabrication stopped looking like a niche garage pastime and started looking like a contemporary way to make real products.

The clearest indication that desktop fabrication still has room to grow didn't come from a hardware launch at all. It came from OpenAI, with its announcement in September that users could purchase directly on Etsy through ChatGPT. It was an acknowledgment that individual creators aren't a sideshow, but first-class participants — a platform doesn't redesign flows around edge cases.

**What I'm Watching For At CES**

A flood of new machines will demo at CES, because the obvious gaps are still there. Finishing tools that remove manual labor. Hybrid systems that collapse steps instead of forcing makers to shuttle parts between machines. Better joining, assembly, coloring, and surface-treatment workflows. Once objects are meant to be sold, not just shown, rough edges stop being character.

But what I'm on the lookout for at CES isn't desktop manufacturing getting faster, cheaper, or more diverse. What matters now is whether anyone has figured out that the real bottleneck sits outside the machine: in software. This is where AI actually earns its keep.

**Fast Fashion's Model, Minus The Clothes**

For most people using these gadgets, execution isn't the hard part. The harder question comes earlier: what's worth making in the first place, how to differentiate it, and whether the idea justifies time, material, and machine hours. That uncertainty is where most of the waste lives. Desktop manufacturing brands, for the most part, still treat this as someone else's problem.

Every modern printer, laser, or CNC already runs through an app. **That app could be doing far more than pushing files and reporting temperatures. It could surface reality: what products are selling now, which colors and forms are becoming saturated, which variations keep reappearing because demand is pulling them forward.** Some of that signal comes from marketplaces, some from social platforms, and some from boring but revealing details like repeated buyer questions and listing revisions. Right now, makers assemble this context manually — scrolling, guessing, screenshotting, trusting instinct. AI could collapse that distance, not by telling people what to make or replacing taste, but by making it harder to make decisions in the dark.
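As a rough sketch of that idea, assuming such signals could be collected at all: the `Signal` shape, the source names, and the weights below are invented for illustration, not a real marketplace or vendor API.

```python
# Hypothetical sketch: a machine's companion app surfacing demand signals
# instead of only pushing files and reporting temperatures.
from collections import Counter
from dataclasses import dataclass


@dataclass
class Signal:
    source: str    # e.g. "marketplace", "social", "buyer_questions"
    product: str   # e.g. "hexagon wall planter"
    weight: float  # how strongly this sighting suggests real demand


def demand_report(signals: list[Signal], top_n: int = 5) -> list[tuple[str, float]]:
    """Aggregate scattered sightings into a ranked what's-pulling list.

    This replaces the manual loop of scrolling, guessing, and
    screenshotting. It does not pick the product for the maker.
    """
    totals: Counter[str] = Counter()
    for s in signals:
        totals[s.product] += s.weight
    return totals.most_common(top_n)
```

The design point is modest: the app ranks evidence; the maker still decides what deserves time, material, and machine hours.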
The same blind spot shows up after an object exists. Turning something into a product is still slow, repetitive work: writing listings, managing variants, adjusting descriptions, responding to the same questions over and over. It's not creative labor, but it directly affects whether something sells.

**Turning Collective Experience Into a Growth Engine**

The most consequential impact AI has in desktop manufacturing is that it changes how the category grows.

Until now, growth has been constrained by hardware tradeoffs. Machines have to assume a user. Design them to be powerful and flexible, and beginners burn through material learning the basics. Design them to be forgiving, and experienced users quickly run into ceilings.

If a company can accumulate experience across its entire installed base — thousands of machines, jobs, failures, fixes — and embed that knowledge into its app **before** production begins, two things happen at once. The barrier to entry drops, because new users avoid the most common mistakes by default. And yield improves for experienced users, because fewer runs turn into avoidable losses.

(Sidebar: This knowledge doesn't come from the machine "understanding" success or failure in a human sense, but from signals: jobs that get canceled and rerun, settings that are repeatedly overridden, parts that are reprinted with thicker walls or different materials, files that are finished but never reused or exported. At scale, those behaviors stop looking like noise and start looking like evidence.)
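Here's one way those behavioral signals might be turned into defaults, purely as a sketch: the event names, weights, and threshold are all invented, and a production system would need far more care about confounders.

```python
# Hypothetical sketch: turning fleet telemetry into safer defaults.
# Event kinds mirror the sidebar's signals; the scores are illustrative.
from collections import defaultdict

REGRET_WEIGHTS = {
    "job_canceled_and_rerun": 1.0,
    "setting_overridden": 0.5,
    "reprinted_thicker_walls": 0.8,
    "finished_never_exported": 0.3,
}


def risky_profiles(events: list[dict], threshold: float = 50.0) -> set[str]:
    """Flag the material/setting profiles the whole fleet keeps regretting.

    Each event looks like:
      {"profile": "PLA/0.28mm-draft", "kind": "job_canceled_and_rerun"}
    Profiles scoring above the threshold get shipped back to every app as
    warnings or revised defaults, so a first-time user never has to
    discover the failure personally.
    """
    scores: defaultdict[str, float] = defaultdict(float)
    for e in events:
        scores[e["profile"]] += REGRET_WEIGHTS.get(e["kind"], 0.0)
    return {profile for profile, score in scores.items() if score >= threshold}
```

Nothing here requires the machine to understand anything; the threshold just formalizes "the fleet keeps regretting this combination."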
The effect compounds. More users generate more data. More data sharpens recommendations. Better recommendations increase success rates. Higher success rates attract more users. **That feedback loop doesn't just improve outcomes — it makes the platform harder to replace.**

Desktop manufacturing brands have historically lived and died by hardware cycles. Systems like this introduce software dynamics instead: compounding advantage, rising switching costs, and growth that accelerates with usage rather than resets. Once those dynamics are in place, they're very hard to compete against.

**Final Thoughts**

You can't write about CES without acknowledging the big elephants wandering the Vegas floor for the past couple of years: AI wearables. This is the category at CES with the most ambition and the least clarity.

It's not that people don't want AI. But most think they already have it. GPT is on every phone. Alexa lives at home. When an AI wearable can't point to a concrete task it performs better because it's worn, it reads as redundant rather than revolutionary.

Utility beats vision. Context beats capability. That's the lesson these two "dinosaur" categories have already absorbed — Insta360 and Bambu Lab became rising stars by specializing in niche, concrete, tangible pain. If the AI wearables category is going to break through, it needs to become narrower, by latching onto a single, high-frequency job where proximity to the user matters — and where removing the device would immediately make something harder, slower, or more expensive.

I'm hoping a couple of AI wearable startups have already realized this, but it may take until CES 2027 for an AI wearable that doesn't arrive asking for a reason to exist.