AI Industries Converge: Llama 3 and Electric Atlas Have More In Common Than You Think


by Ksenia Se, April 25th, 2024

Too Long; Didn't Read

Meta recently released Llama 3, an advanced open language model, just as Boston Dynamics introduced its new electric Atlas robot. These developments, while different, are connected by shared AI advances that benefit both fields. Progress of the kind behind Llama 3 extends into robotics, influencing areas like motion planning and control. This convergence could improve robot capabilities, broaden access to sophisticated AI, and merge software and hardware innovation, bringing AI-integrated robots into everyday tasks.

Last week, two remarkable events occurred: Meta announced Llama 3, its most capable and highly anticipated open large language model (LLM) to date, and Boston Dynamics introduced its new fully electric Atlas robot platform — a stark departure from its hydraulic predecessors! At first glance, these developments may seem unrelated, but they are deeply interconnected, with the potential to drive each other forward and reshape how we work with and deploy AI.

At the heart of this connection is the transformative power of advanced AI. Breakthroughs in natural language processing and machine learning, exemplified by Llama 3, extend beyond language alone. These techniques, from deep neural networks to reinforcement learning, are also propelling significant advancements in computer vision, motion planning, and robot control. As language models like Llama 3 continue to expand the boundaries of AI’s ability to understand and interact with the world, they also lay the groundwork for more intelligent and capable robots.

The implications, of course, extend beyond communication. The same techniques used to train LLMs on vast amounts of text are also applicable to learning from massive datasets of sensor readings, images, and simulations for robots. Moreover, the open-source nature of models like Llama 3 democratizes access to state-of-the-art AI, enabling a broader spectrum of researchers and companies to integrate these capabilities into their robotic systems — or at least to start experimenting in the field.

Every successful (and unsuccessful) attempt by Atlas to manipulate an object, navigate a cluttered factory floor, or assist a human worker becomes a valuable data point for the model, just as user interactions do for Llama 3. Building on this model, Meta developed a standalone app and embedded it into WhatsApp, Instagram, and Facebook, collecting vast and highly personalized data. This could lead us toward synthetic social networks or, more likely, an extremely personalized, embodied AI experience. As Llama 3 and the electric Atlas converge, they may accelerate each other’s development, blurring the lines between software and hardware, bits and atoms. As language models become more proficient at understanding and interacting with the world, robots will become more capable of applying that knowledge.

This doesn’t necessarily mean anything sinister. More capable robots, which can be fine-tuned and improved with your personal data and preferences, could take over mundane tasks at home. Imagine, instead of hundreds of text editors, finally having a robot that can load and unload the dishwasher and do the laundry.

Currently, launches like Llama 3 are seen as enhancing AI’s understanding and processing capabilities, but in the long term, they will be one of the milestones in building and deploying machines that are finely attuned and aligned with us to assist in our daily lives.

Other Impressive Models:

(I didn’t send you the FOD digest last Monday because I was at a conference dedicated to citizen diplomacy. Therefore, today, we have an extensive list of recently launched models that are worth checking out, along with other relevant research papers.)

  • Mixtral-8x22B: Mistral AI introduced a sparse mixture-of-experts model that optimizes cost and latency by activating only a subset of its parameters during inference, offering high capacity and efficiency for further training and applications →read the paper
  • Rerank 3: Launched by Cohere, this model enhances enterprise search and Retrieval Augmented Generation systems, improving accuracy in document retrieval across multiple languages and data formats →read the paper
  • Idefics2: Hugging Face's model that excels in integrating text and image data, significantly improving on OCR and visual question-answering tasks →read the paper
  • Reka Core, Flash, and Edge: A series of multimodal language models from Reka that process text, images, video, and audio, demonstrating high performance across diverse tasks →read the paper
  • Ferret-UI: Developed by Apple, this model specializes in mobile UI interaction, enhancing user experience by accurately performing tasks tailored to the unique properties of UI screens →read the paper
  • Zamba: Zyphra's compact and efficient SSM Hybrid model is designed for performance with reduced training data needs, optimized for consumer hardware →read the paper
  • YaART: Yandex's advanced text-to-image diffusion model that optimizes training efficiency and quality with smaller, high-quality datasets →read the paper
  • RHO-1: A novel approach by Xiamen University focusing on Selective Language Modeling to enhance efficiency by prioritizing useful tokens during training →read the paper
  • RecurrentGemma: Google DeepMind's model that moves past traditional transformers by incorporating recurrences for more efficient long-sequence processing →read the paper
  • JetMoE: An economical LLM from the MIT-IBM Watson AI Lab using a mixture-of-experts architecture to achieve high performance at reduced costs →read the paper
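Two of the models above (Mixtral-8x22B and JetMoE) rely on sparse mixture-of-experts routing: a gate scores every expert, but only the top-scoring few actually run, so compute grows with the number of experts selected rather than the total. A minimal NumPy sketch of top-2 routing, purely illustrative and not either model's actual implementation:

```python
import numpy as np

def top2_moe(x, gate_w, experts):
    """Route input x through only the 2 highest-scoring experts.

    x: (d,) input vector
    gate_w: (n_experts, d) gating weights
    experts: list of callables, each mapping (d,) -> (d,)
    """
    logits = gate_w @ x                      # one score per expert
    top2 = np.argsort(logits)[-2:]           # indices of the 2 best experts
    weights = np.exp(logits[top2])
    weights /= weights.sum()                 # softmax over the selected pair
    # only 2 of n_experts run, so compute scales with k, not n
    return sum(w * experts[i](x) for w, i in zip(weights, top2))

rng = np.random.default_rng(0)
d, n = 4, 8
gate_w = rng.normal(size=(n, d))
# each expert here is just a random linear map, standing in for an FFN block
experts = [lambda v, W=rng.normal(size=(d, d)): W @ v for _ in range(n)]
y = top2_moe(rng.normal(size=d), gate_w, experts)
print(y.shape)  # (4,)
```

In a real transformer the gate and experts are learned jointly, and the routing happens per token inside each MoE layer; the point of the sketch is only the top-k selection that keeps inference cost low.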

News from The Usual Suspects ©

Google

  • open-sourced the Gemini Cookbook
  • is merging its Research, DeepMind, and Responsible AI teams to accelerate its “capacity to deliver capable AI”
  • DeepMind published an article about the ethics of advanced AI assistants, stressing the significance of their integration into daily life

Stanford HAI

  • published the 2024 AI Index report, reflecting the escalating impact of AI on society. It contains new estimates of AI training costs, detailed insights into the responsible AI landscape, and a new chapter on AI's influence on science and medicine. Highlights include the substantial cost of training state-of-the-art models such as GPT-4 and Gemini Ultra; the dominance of the U.S. in producing top AI models; and significant investment growth in generative AI despite overall funding declines. The report also notes a major increase in AI-related regulations in the U.S. and heightened public awareness of, and nervousness about, AI's future impact.

Enjoyed This Story?

I write a weekly analysis of the AI world in the Turing Post newsletter, where we aim to equip you with comprehensive knowledge and historical insights so you can make informed decisions about AI and ML.