
How Did AI Fare in 2024? Humane, Rabbit R1, and More

by Geek on record, January 9th, 2025

Too Long; Didn't Read

2024, a landmark year for Artificial Intelligence (AI), is drawing to a close. The year saw a number of notable developments in the field, and the Humane Ai Pin and the Rabbit R1 were two of the first attempts to package AI in a standalone device.

2024 is drawing to a close, marking one of the most significant years in Artificial Intelligence (AI) history. Let’s take a moment to recap the changes that have defined this year, when AI went from being a buzzword to becoming a product in itself.


We’ve dealt with failed gadgets, like the Humane Ai Pin. This wearable integrated a camera, a projector, and a speaker into a magnetic pin. It replicated some smartphone functionalities, and its sleek design offered a certain wow factor. But it did all of this in the worst ways possible, neglecting both usability and affordability.


The pin easily overheated and required a significant monetary investment to operate ($499 for the device, plus $24 per month before taxes), while adding little to no value or convenience for the user. Interacting with its projector was more of a gimmick than an interface, and significantly slower than using any smartphone’s screen.


Humane’s popularity took a nosedive when tech reviewers tore the product apart: it simply didn’t live up to the promises its creators had made. This was one of the year’s first attempts to turn AI into a standalone device, and it was an embarrassing one.


Ai Pin interface projected on a hand


The Rabbit R1 was the second noteworthy product to try to package AI into a standalone device. It also seemed to learn from the Humane Ai Pin’s biggest issues: it used a touch screen instead of a projector, carried a lower price ($199), and required no subscription.


But the R1 also had big issues: at launch, it didn’t do most of the things that its creators had promised (like being able to interact with pretty much any service thanks to its self-learning “Large Action Model”). Instead, the R1 felt as limited as any other GenAI app on your smartphone (think ChatGPT or Gemini).


In fact, it didn’t take long for someone to reverse engineer the R1 and extract an app package that could run on a regular Android phone. Its creators intended the R1 to be a companion device, but the truth is that it was no better than any smartphone with ChatGPT, Gemini, and Google Lens installed.


Rabbit R1 with its bold orange color


While startups focused on creating AI gadgets like the Humane Ai Pin and the Rabbit R1, big tech companies were making strides of their own in generative AI.


Personal computing devices like smartphones and laptops are leveraging AI to tackle everyday tasks: crafting professional emails, generating creative birthday invitations, or seamlessly removing unwanted objects from family photos.


AI must now be part of any tech launch if a company wants to keep its investors interested: Microsoft launched its Copilot+ PCs, Google released the Pixel 9 lineup with Gemini, and Apple debuted Apple Intelligence with the latest iPhone 16.


OpenAI stole the show with its GPT-4o demonstration, one of the first models to enable real-time voice conversations in which users can interrupt freely. Google joined the field with Gemini Live and its Project Astra demo, which showed a multimodal understanding of the world around us. These Large Language Models (LLMs) can combine audio, images, and other inputs to power advanced AI agents capable of complex interactions.
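
To make “multimodal” more concrete, here is a minimal sketch of sending text and an image to GPT-4o in a single request through the official OpenAI Python SDK. It assumes the SDK is installed and an API key is set in the environment; the prompt and image URL are placeholders, so treat it as an illustration rather than a production integration.

# Minimal multimodal request: text and an image in one message.
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What landmark is shown in this photo?"},
                # Placeholder image URL; any publicly reachable image works.
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
)

# The model answers based on both the text prompt and the image content.
print(response.choices[0].message.content)

The same chat interface accepts multiple image parts per message, which is part of what makes these models a natural fit for assistant-style agents that reason over what a camera sees.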


These advancements in AI capabilities are paving the way for more sophisticated AI companions, such as smart glasses. Meta leads this space with its Ray-Ban Smart Glasses, which integrate Meta AI, while explorations like the Meta Orion augmented-reality glasses showed an exciting future for portable computing.


While retail products like Orion or Astra might still be years away, these research projects demonstrated that there is interest in smart companion devices that complement our smartphones.


Meta Orion AR glasses, with its computer puck and its wrist tracker


2024 also showed that the reach of AI is not limited to smartphone applications. Self-driving technology improved significantly, enabling more automakers to bring it to their fleets. Announcements like Tesla’s Robotaxi added to the fanfare, even though we likely won’t see any of those cars on the road for years to come.


Perhaps some of the biggest improvements in AI were seen in the video generation space. Long gone are the days of people with six fingers or randomly appearing objects.


OpenAI’s Sora allows creators to generate impressive videos of up to 60 seconds from simple text prompts, including complex camera motions that would have taken hours to animate and produce. Google followed suit with Veo, which can also turn long, descriptive text prompts into highly detailed scenes.

Audio generation is not falling behind either, and tools like Suno make it extremely easy for anyone to become a music producer. As Suno says on its About page, the company wants a “future where anyone can make great music. No instrument needed, just imagination.”


This has been the year in which we definitively stopped being able to believe everything we hear, see, or read online. The barriers that once existed to generating media content have all but disappeared, and it’s now easier than ever to create anything we can think of, thanks to AI.


2024 has served to push the boundaries of what’s possible with AI, and this is just the beginning. With 2025 on the horizon, we should anticipate deeper software integrations, probably some new use cases we can’t yet imagine, and ever more content that is indistinguishable from human output. Where is the limit to the use of AI? In fact, is there a limit at all?



Did you like this article? Subscribe to get new posts by email.