paint-brush
A Tale of Two LLMs: Open Source vs the US Military's LLM Trialsby@salkimmich
972 reads
972 reads

A Tale of Two LLMs: Open Source vs the US Military's LLM Trials

by Sal KimmichJuly 10th, 2023
Read on Terminal Reader
Read this story w/o Javascript
tldt arrow

Too Long; Didn't Read

Large Language Models (LLMs) have become prominent in the world of artificial intelligence. LLMs are trained on vast amounts of internet data, enabling them to generate human-like responses to user prompts. This article explores the security posture of open-source LLM projects and the US military's trials of classified LLMs.
featured image - A Tale of Two LLMs: Open Source vs the US Military's LLM Trials
Sal Kimmich HackerNoon profile picture

In the world of artificial intelligence (AI), Large Language Models (LLMs) have become prominent. LLMs are trained on vast amounts of internet data, enabling them to generate human-like responses to user prompts. This article explores two different applications of LLMs: the security posture of open-source LLM projects and the US military's trials of classified LLMs. It is important to understand the basic concepts of AI, LLMs, and open source to grasp the significance of these developments. We covered some of the details of that research from our initial discussion on Security Threats to High Impact Open Source Large Language Models

To really understand what this is about, we'll need to think about the idea of a threat model in cybersecurity.

Threat Modeling is Great, When Done Right

A good threat model undergoes the continuous process of using "hypothetical scenarios, system diagrams, and testing to help secure systems and data. By identifying vulnerabilities, helping with risk assessment, and suggesting corrective action, threat modeling helps improve cybersecurity and trust in key business systems." Cisco, What is Threat Modeling?

There's great work from cybersecurity professionals volunteering to help open source get more secure, and any business aiming at keeping their ISO Security Certification knows what I’m talking about already. 

Threat models are Asset Driven: These Assets are Different

Here’s what you have to understand: the threat models for LLMs, and for generative code in general, are going to look a lot different. Now the threats are not just traditional surface area, they can be internal to the most valuable asset of the tech stack: the LLM itself.  

Threat models can, and should be, widely different even when considering the same technology. A rational threat model considers the likelihood of a motivated actor against the value of the asset (how much does someone want to hack it, and how easy is it for them to actually hack into it, and is it worth making them just think they are hacking into it with a honeypot

Chatbots in a Honeypot World might sound like poetic hyperbole, but it’s actually cutting edge research in this space. Go read it after this.

All LLMs will struggle with some fundamental security concerns, but Open Source LLMs are going to face unique challenges, as listed in the OWASP Top 10 vulnerabilities for Large Language Model Applications. This is a good thing, there’s security work we can do to fix it. Now, let’s dive a bit deeper into what considerations we need for this kind of threat model:


Contrasting Security Postures:

Open-source LLM projects have revolutionized digital content creation but raise concerns about security risks and vulnerabilities. These projects exhibit an immature security posture, emphasizing the need for enhanced security standards and practices. The popularity of LLMs in open-source projects makes them attractive targets for attackers, highlighting the urgency for improved security measures. It is crucial to prioritize security considerations when choosing software solutions involving LLMs. For more on this, see Security Threats to High Open Source Impact Large Language Models. 

The contrasting worlds of open-source LLM projects and the US military's trials with classified LLMs demonstrate the evolving landscape of generative AI. Open-source LLM projects require improved security measures to mitigate risks, while the military's adoption of LLMs represents a departure from its cautious nature.

The potential benefits of LLMs in military operations are significant, but challenges related to bias, misinformation, and security must be carefully addressed. This offers us a very unique opportunity to understand how similar technologies can have massively different threat surfaces, and different ways of protecting them. 

The US Military's Trials with Classified LLMs: 

The US military, known for its cautious approach to new technology, has surprised observers by swiftly adopting generative AI, including LLMs. The military is conducting trials with classified LLMs as part of broader Defense Department experiments focused on data integration and digital platforms. The specific LLMs being tested remain undisclosed, but startups like Scale AI are among the platforms being evaluated.

Benefits of LLMs for the Military: 

LLMs offer the potential to transform military operations by enabling faster data processing and decision-making. These trials aim to develop AI-enabled capabilities for military planning, sensor analysis, and firepower decisions. LLMs have shown promise in completing tasks that traditionally took hours or even days in a matter of minutes, utilizing secret-level data. That’s honestly pretty cool. 

Considerations and Challenges: 

While LLMs hold great promise, there are important considerations and challenges to address. Generative AI can compound bias and relay incorrect information with confidence. It is vulnerable to hacking and data poisoning, raising concerns about the reliability of AI-enabled systems. The US military is aware of these challenges and is working with tech security companies to evaluate and test the trustworthiness of AI-enabled systems.

Importance of Security Measures: 

Both open-source LLM projects and classified LLMs used by the military highlight the importance of robust security measures. Open-source LLM projects currently demonstrate poor security posture, emphasizing the need for enhanced security standards. The military's trials with classified LLMs necessitate rigorous security protocols to protect sensitive information and ensure reliable decision-making. It’s a crazy time to be alive. 

Looking Forward: 

There’s no right and wrong way to explore emerging technologies. I know that might sound controversial, but it’s true. This doesn’t mean we shy away from the practical and moral responsibility of building good security models. It’s about showing up to help AI developers stay secure while they are changing our day-to-day world at lightning speed in the fields of medicine, physics, and generative art. It’s just work that’s got to get done, and on that note, I’ll leave my favorite motto from a Data on Kubernetes meetup a few months ago: 

“We are here not because it is easy
But because we thought it would be easy
And now we are stuck doing it"