
Securing AI: Concerns & Immune Systems for Emerging Technologies

by Sal Kimmich, July 11th, 2023



Why We Need to Think About the Full Stack of AI

Artificial Intelligence is an emerging technology, with significantly more end users than the usual techno-shift brings. As discussed in both Security Threats to High Open Source Impact Large Language Models and A Tale of Two LLMs: Open Source vs the US Military's LLM Trials, this is one of the fastest growing technologies without a mature bedrock of security and reliability support.


In this piece, we’ll talk about the larger scope of understanding and protecting artificial intelligence pipelines, whether it’s a large language model or a large anything model.


With models like ChatGPT presenting their cognitive capabilities through the way the end user engages with them, a lot of people who talk about AI think of it like a magical thinking head: a bot that can talk, but not quite walk.


It Gets More Interesting, The More You Know

To ensure the overall health and well-being of AI, we must recognize that it requires more than just a brain.


Like the human body, AI relies on multiple interconnected components. Just as our lungs provide vital oxygen to sustain life, the carbon consumption of AI represents its energy source. Equally crucial is AI's immune system—cybersecurity—protecting against threats and vulnerabilities. In this series, we will explore the comprehensive nature of AI, focusing on the critical role of cybersecurity and the importance of holistic treatment for addressing security concerns.


The AI Ecosystem: Beyond the Cognitive "Talking Head"

AI, with models like ChatGPT at its forefront, possesses remarkable cognitive abilities. These models excel in generating human-like responses and assisting with various tasks. However, AI is not solely reliant on its cognitive prowess. To function effectively, AI requires a stable and secure ecosystem that sustains its operations. This ecosystem includes data sources, computational infrastructure, energy consumption, and cybersecurity measures.


Carbon Consumption: AI's Lungs

Similar to how our lungs enable the exchange of oxygen and carbon dioxide, the carbon consumption of AI represents its energy source. As AI continues to evolve and become more prevalent, addressing its energy demands and environmental impact becomes crucial. Striving for energy efficiency and sustainable practices is essential for the long-term viability of AI technology. “A main problem to tackle in reducing AI’s climate impact is to quantify its energy consumption and carbon emission, and to make this information transparent”, says Payal Dhar in a recent Nature Machine Intelligence journal article.
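
To make that quantification concrete, here is a minimal sketch of how you might measure a single run's footprint. It assumes the third-party CodeCarbon library (my choice of tooling, not something prescribed in the article) and a hypothetical train_model() standing in for your actual workload:

```python
# pip install codecarbon
from codecarbon import EmissionsTracker

tracker = EmissionsTracker(project_name="training_run_demo")
tracker.start()
try:
    train_model()  # hypothetical stand-in for your actual training loop
finally:
    emissions_kg = tracker.stop()  # estimated kilograms of CO2-equivalent

print(f"Estimated emissions: {emissions_kg:.4f} kg CO2eq")
```

Logging a number like this per run is exactly the kind of transparency Dhar is calling for.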


To be clear, it’s not always about making the technology more efficient. Sometimes, it’s about making humans less stupid about wasting compute. Shifting developer mindsets toward massive state-space feature engineering means getting people to understand why they need to use HyperLogLog (guesstimate the size of your feature space before you compute it, and how many different types of hay are in the haystack). It’s also why you have to statistically consider Goodhart’s Law (decide how you want to investigate that haystack, for instance with best-of-n sampling, to judge whether executing with the compute resources you have - or want to spend - is worth the byte flips).
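
As a rough illustration of the HyperLogLog point, here is a minimal sketch, assuming the third-party datasketch library and a hypothetical raw_feature_stream() generator, that estimates the cardinality of a feature space before you pay to materialize it:

```python
# pip install datasketch
from datasketch import HyperLogLog

hll = HyperLogLog(p=12)  # ~1.6% typical relative error at this precision
for value in raw_feature_stream():  # hypothetical generator over raw feature values
    hll.update(str(value).encode("utf8"))

estimated_distinct = hll.count()
# Gate the expensive job: only build the full feature space if it's tractable.
if estimated_distinct > 50_000_000:  # threshold is illustrative, tune to your budget
    print(f"~{estimated_distinct:,.0f} distinct values: rethink before computing")
```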


Cybersecurity: AI's Immune System

Just as our immune system protects our body from infections, AI relies on robust cybersecurity measures to defend against threats. The interconnectedness of AI systems with various networks and data sources exposes them to vulnerabilities and potential attacks. Ensuring the security of AI requires comprehensive protection mechanisms, including data privacy, secure infrastructure, and resilient defenses against malicious actors.


AI Security Groups and Resources

OWASP Foundation

AI/ML security work being done: The OWASP Foundation more broadly aims to improve the security of software through its community-led open source software projects. It has an AI Security Guide, though that guide is broadly light on technical implementation. The OWASP Machine Learning Security Top 10 project provides developer-centered information about the top known cybersecurity risks for open source machine learning, each with a description, an example attack scenario, and a suggestion of how to prevent it.


The LFAI Security Committee

LFAI supports AI and Data open source projects through incubation, education, and best practices.

AI Security Regulatory Organizations

ETSI’s Securing Artificial Intelligence (SAI) group focuses on common-sense regulation and standards around AI/ML, scoped exclusively to EU regulation. Similarly, the AIRS group is an independent working group advancing AI risk work aimed at security governance. The CSA Artificial Intelligence group was likewise established solely for regulation, and is scoped only to cloud services. IEEE’s AI/ML efforts are also largely regulatory advisements.


Central Nervous System Infections: Addressing Security Concerns with Large Language (and Large Anything) Models

In the context of AI, security concerns around the data and model processing of large language (and large anything) models can be likened to a central nervous system infection, affecting the core functionality of the entire system.


While traditional cybersecurity threats often focus on breaches, data leaks, and unauthorized access, the challenges posed by Large Language Models (LLMs) and other generative technologies extend beyond conventional threats. Understanding these distinctive aspects is crucial for effectively addressing security concerns in this domain.


Hallucinations

One of the notable challenges associated with LLMs is the problem of hallucinations, or the generation of false information. Because of their ability to generate human-like responses based on training data, LLMs may inadvertently produce content that lacks factual accuracy or even fabricate entirely false narratives. This presents a unique security risk, as it can lead to the dissemination of misinformation, manipulation of public opinion, and potential damage to the credibility of AI systems. Mitigating this risk requires a combination of careful training data curation, robust fact-checking mechanisms, and ongoing validation processes to ensure the veracity of the generated content. This is a live security problem in open source package registries right now: a model can hallucinate a plausible-sounding dependency name, and anyone can then register that name and fill it with malicious code.
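
One pragmatic, if partial, defense is to verify that an LLM-suggested dependency actually exists before installing it. A minimal sketch against PyPI's public JSON endpoint (existence alone doesn't prove a package is trustworthy; it only catches names the model invented outright):

```python
import urllib.error
import urllib.request

def exists_on_pypi(package: str) -> bool:
    """Return True if the package name is registered on PyPI."""
    url = f"https://pypi.org/pypi/{package}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False  # 404 means the name was never registered

# Check an LLM-suggested dependency before adding it to requirements.txt:
print(exists_on_pypi("requests"))  # True: a long-established package
```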


Internal Drift

Another significant concern is internal drift within LLMs and generative models. Over time, these models can exhibit a shift in behavior, generating responses that deviate from the intended purpose or ethical guidelines. This phenomenon poses a challenge to maintaining control and predictability in AI systems, as it introduces an element of uncertainty that can be exploited by adversaries. Addressing internal drift requires continuous monitoring and adaptation, establishing feedback loops between developers and the models, and implementing techniques such as reinforcement learning to reinforce desired behavior and align the outputs with ethical standards.


Proactive threat detection involves actively monitoring the behavior of LLMs and generative models to identify potential vulnerabilities or anomalies in their output. Secure development practices entail incorporating robust security measures throughout the entire development lifecycle, including secure coding practices, vulnerability testing, and adherence to established industry standards.
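
To make "actively monitoring" concrete, here is a minimal sketch of one common drift signal, the population stability index (PSI), applied to any scalar you log per response, such as output length or a toxicity score. This is my illustration of the idea, not a method prescribed by the article:

```python
import numpy as np

def population_stability_index(baseline, current, bins=10):
    """PSI between a baseline sample and a live sample of some output metric."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    b_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    c_pct = np.histogram(current, bins=edges)[0] / len(current)
    eps = 1e-6  # avoid log(0) and division by zero in empty bins
    b_pct, c_pct = b_pct + eps, c_pct + eps
    return float(np.sum((c_pct - b_pct) * np.log(c_pct / b_pct)))

# A common rule of thumb: PSI above ~0.2 suggests meaningful drift and is a
# reasonable trigger for human review of the model's recent outputs.
```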


A Holistic Treatment for AI Security

To ensure the longevity and trustworthiness of AI, we’ll have to integrate security considerations throughout the AI development lifecycle, establish strong partnerships with cybersecurity experts, foster a culture of security awareness and education, and implement ethical guidelines, robust encryption practices, and continuous monitoring.


It’s All About Attention

I’ve spent a lot of my life thinking about how humans interact with computers, and computers that interact back with humans. I used to study how we could design better Boeing airliner cockpits by getting real pilots into a 4D flight simulator of potentially life-threatening conditions to see where they looked, how they moved, and what they said to prevent a crash - or not. Later, I worked at the National Institutes of Health on real-time fMRI neurofeedback, where we timed simple visual feedback to real-time brain networks to improve clinical psychiatric outcomes without any pill-based intervention. Just attention.


Attention is powerful. It’s probably the most powerful thing to learn about cybersecurity. When threat modeling any technical stack, you have to be able to think about where people aren’t paying attention. That’s your most exploitable surface area.


Emerging technologies are always a really interesting place to observe people’s attention. There hasn’t really been an emerging technology that so many people have had an opinion on before, because this time it’s a technology that’s genuinely easy to start using. That’s beautiful. Attention and awareness are different things, though. We need to understand how the whole thing works.


You Should Be Paying Attention to Chip Security

AI Chips: The Heart of the AI Body

AI-specific chips, often referred to as neural processing units (NPUs) or graphics processing units (GPUs), serve as the vital heart of the AI ecosystem. Much like the human heart, these specialized hardware components play a crucial role in enabling the efficient and optimal functioning of AI systems. In this series, we delve into the significance of chip technology and its pivotal role in ensuring the security and robustness of the AI body.

The Powerhouse Organs:

AI-specific chips are specifically designed to accelerate AI computations, enabling exceptional performance, efficiency, and speed. These chips are tailored to meet the unique requirements of AI workloads, such as training deep neural networks and running complex AI algorithms. Just like the heart pumps blood to supply oxygen and nutrients to the body, AI-specific chips empower AI systems to process and analyze vast amounts of data with remarkable speed and efficiency.

The Importance of Chip Technology:

Advancements in chip architectures and designs have led to significant breakthroughs in AI capabilities. Parallel processing, specialized matrix calculations, and optimized memory access are just a few examples of the technological advancements that AI-specific chips have brought to the field. These advancements push the boundaries of AI performance, enabling AI systems to tackle increasingly complex tasks and deliver accurate results.
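
A quick way to feel the difference those specialized chips make is to time the same matrix multiply on a CPU and on an accelerator. A minimal sketch, assuming PyTorch and a CUDA-capable machine (my example, not from the article):

```python
import time
import torch

def time_matmul(device: str, n: int = 4096) -> float:
    """Time a single n x n matrix multiply on the given device."""
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    if device == "cuda":
        torch.cuda.synchronize()  # make sure setup has finished
    start = time.perf_counter()
    _ = a @ b
    if device == "cuda":
        torch.cuda.synchronize()  # wait for the async GPU kernel to complete
    return time.perf_counter() - start

print(f"CPU: {time_matmul('cpu'):.3f}s")
if torch.cuda.is_available():
    print(f"GPU: {time_matmul('cuda'):.3f}s")
```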


Additionally, chip technology plays a vital role in addressing energy consumption concerns. As AI deployment expands, the energy efficiency of AI systems becomes a crucial factor. Advancements in chip technology contribute to reducing energy consumption, making AI more sustainable and environmentally friendly.


The Role of Secure Chip Technology:

As AI becomes more prevalent and critical in various sectors, ensuring the security of AI-specific chips becomes paramount.


The article "Security Becomes Much Bigger Issue For AI/ML Chips, Tools" has some great key points to understand, but I’ll boil them down to two that you need to understand today:


  1. The rapid development of AI and machine learning chips has increased the potential for security threats, including intellectual property theft and attacks that can corrupt data or lead to ransomware (this includes supply chain and supplier threats).

  2. The need for standards that cover both hardware and software. The article suggests using multiple AI algorithms and diverse sets of training data to reduce the risk of attacks, as sketched just after this list.
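
As a sketch of what "multiple AI algorithms" can look like in practice (my illustration, not the article's code): combine predictions from independently trained models and treat disagreement as a signal worth investigating, since an attack that corrupts one model or its training data is less likely to fool all of them at once.

```python
from collections import Counter

def majority_vote(predictions: list[str]) -> tuple[str, bool]:
    """Combine labels from independently trained models; flag disagreement."""
    counts = Counter(predictions)
    label, votes = counts.most_common(1)[0]
    suspicious = votes <= len(predictions) // 2  # no strict majority
    return label, suspicious

# Three models trained on diverse data: one dissenting vote is tolerated,
# but a split verdict gets flagged for human review.
label, suspicious = majority_vote(["benign", "benign", "malicious"])
print(label, suspicious)  # benign False
```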


If you are optimizing - yes, I said optimizing, not just running - an AI system, you will be building a system that maximizes compute per unit of cost.


You can do this with hyperscalers (kind of), but what if you are building one from scratch? You’ve got NVIDIA (GPUs), Intel (CPUs and VPUs), and Google with the Tensor Processing Unit (TPU), and those are just the players who have been on the market the longest - emerging tech means emerging market.
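
Maximizing compute over cost is, at bottom, simple arithmetic. A toy sketch for comparing accelerator options, with placeholder numbers that are purely illustrative and not real vendor benchmarks:

```python
def effective_tflops_per_dollar(peak_tflops: float, utilization: float,
                                hourly_cost: float) -> float:
    """Sustained TFLOPS you actually get per dollar of hourly runtime cost."""
    return peak_tflops * utilization / hourly_cost

# Hypothetical placeholder numbers, purely for illustration:
options = {
    "accelerator_a": effective_tflops_per_dollar(312.0, 0.45, 3.00),
    "accelerator_b": effective_tflops_per_dollar(125.0, 0.60, 1.20),
}
best = max(options, key=options.get)
print(best, f"{options[best]:.1f} sustained TFLOPS per dollar-hour")
```

The point of a helper like this is that real utilization, not peak spec-sheet throughput, is what decides the comparison.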


But, oof, do we have some computational power these days, as seen in the “Blessings of Scale” from Compute Trends Across Three Eras of Machine Learning.



Securing the Evolving AI Body:

Just as the heart is vital for maintaining overall health, ensuring the security of AI-specific chips is crucial for the resilience and trustworthiness of the AI body. Secure chip technology must be integrated into every stage of the AI development process, from chip design and manufacturing to deployment and maintenance. Collaborative efforts among chip manufacturers, AI developers, and cybersecurity experts are essential to identify and address potential vulnerabilities in chip architectures and mitigate emerging threats. By prioritizing chip security, we can ensure that the AI body remains robust, efficient, and secure.



Where do you want to see this go next? Town Hall mode is on for this HackerNoon article, so feel free to comment anywhere on this article where you have a question, a comment, or a better idea than mine. The author will absolutely read it.