Security in Generative AI Infrastructure Is of Critical Importance

by Manish Sinha, November 16th, 2024
The Generative AI industry has exploded, thanks in no small part to the release of ChatGPT in the fall of 2022, which took the world by storm. There has been a lot of hand-wringing over the importance of model safety and data security, along with the possibility of further AI regulation.


According to IBM, around 70% of their survey respondents say that innovation is more important than security, even as 82% accept that security in AI is essential to their business. Unsurprisingly, only 24% of the projects polled in the IBM survey include components meant to secure the initiative.


Security is not an add-on to a thoroughly fleshed-out Generative AI infrastructure. It should be woven through the entire infrastructure rather than slapped on top to pass regulations and compliance checks. The immense power of Generative AI brings with it vast responsibilities. This article aims to convince readers of the importance of security in AI and to give them the knowledge and talking points to become security champions in their organizations.

Objective

The objective of GenAI security should be to secure the entire infrastructure end to end. As security champions, we must secure the data ingestion infrastructure (including the legitimacy of the data itself), data manipulation operations, model deployment, and user interactions.


For the last year, people have complained about the degrading quality of answers provided by ChatGPT, while OpenAI has refuted such claims. While it is true that the quality of responses can deteriorate over time, it is also entirely possible for an internally deployed model to be modified in an unauthorized manner by bad actors, and the resulting responses can be indistinguishable from the typical quality degradation we expect.
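
One pragmatic safeguard against silent, unauthorized modification is to verify model artifacts against a known-good digest before serving them. The sketch below is a minimal illustration; the paths and the recorded digest are hypothetical placeholders.

```python
import hashlib
from pathlib import Path

# Hypothetical locations: adjust to wherever your weights and trusted digests live.
MODEL_PATH = Path("/models/chat-model/weights.bin")
TRUSTED_DIGEST = "3f5a..."  # recorded at release time, stored separately from the model

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file so multi-gigabyte weight files need not fit in RAM."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def load_model_if_untampered(path: Path, expected: str):
    actual = sha256_of(path)
    if actual != expected:
        # Fail closed: never serve a model whose weights diverge from the release digest.
        raise RuntimeError(f"Model integrity check failed: {actual} != {expected}")
    ...  # proceed to load the verified weights
```

Running this check at every deployment and restart makes an unauthorized weight swap detectable instead of looking like ordinary quality drift.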


It goes without saying that customer data, which consists of prompt history and responses, should be considered critical data, and we need to examine any risks that might seriously compromise users' trust. While the conversations are between a human and a machine, the personal nature of such discussions means they should be treated just like human-to-human conversations and secured as such.
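
As a minimal illustration, conversation turns can be encrypted before they are persisted. The sketch below uses the Fernet recipe from the Python cryptography library; in a real deployment the key would come from a KMS or secrets manager, and the record format shown here is an assumption.

```python
from cryptography.fernet import Fernet

# Illustrative only: in production the key comes from a KMS or secrets
# manager, never from source code or a key generated at startup.
key = Fernet.generate_key()
fernet = Fernet(key)

def store_turn(user_id: str, prompt: str, response: str) -> bytes:
    """Encrypt a conversation turn before it touches disk or a database."""
    record = f"{user_id}\t{prompt}\t{response}".encode("utf-8")
    return fernet.encrypt(record)

def read_turn(token: bytes) -> str:
    # Decryption fails loudly (InvalidToken) if the ciphertext was altered,
    # which doubles as a tamper check on stored conversations.
    return fernet.decrypt(token).decode("utf-8")
```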

Risks

There is no limit to the kinds of risk one has to deal with when working on GenAI infrastructure. Beyond the typical best practices from traditional security paradigms, there are AI-specific steps we need to take to achieve a reasonable security posture.


Data Poisoning: We have all heard the old adage of garbage-in, garbage-out. If malicious actors manage to introduce tainted data into the corpus, the responses might not be what we expect. Imagine your company analyzes data and signals intelligence for your customers: if the data were poisoned by modification, deletion, or addition, the results would be far from accurate. In fact, carefully crafted training data can steer an AI model toward wrong responses.
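
A first line of defense is strict validation at the ingestion boundary. The sketch below assumes a hypothetical JSON record format and source allow-list; a real pipeline would add provenance checks and statistical outlier detection on top.

```python
import json

# Hypothetical schema for ingested training records; adapt to your corpus.
REQUIRED_FIELDS = {"source", "text", "collected_at"}
ALLOWED_SOURCES = {"internal-crawl", "licensed-feed"}  # explicit allow-list
MAX_TEXT_LEN = 50_000

def validate_record(raw: str) -> dict:
    """Reject malformed or suspicious records before they reach the corpus."""
    record = json.loads(raw)
    if not REQUIRED_FIELDS.issubset(record):
        raise ValueError("missing required fields")
    if record["source"] not in ALLOWED_SOURCES:
        raise ValueError(f"untrusted source: {record['source']}")
    if not (0 < len(record["text"]) <= MAX_TEXT_LEN):
        raise ValueError("text length out of expected bounds")
    return record
```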


Model Manipulation: While this sounds far-fetched, the National Institute of Standards and Technology has outlined how different types of cyberattacks can manipulate the behavior of AI systems, focusing on evasion and privacy attacks. Model manipulation is closely tied to data poisoning; in practice, it is hard to tell one from the other.


Prompt Injection Attacks: This attack vector is unique to Generative AI and has no counterpart in traditional cybersecurity practice. While a lot of research has gone into understanding the reasoning pathways of these models, for all practical purposes they remain a black box. Attackers can bypass safety controls with specially crafted input prompts that elicit unauthorized responses.


In the worst case, these responses can contain confidential information or reveal the system's inner workings. The attacks work because they exploit the very nature of Generative AI models, which are built to understand natural language.
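
A minimal, and admittedly incomplete, mitigation is to screen prompts before they reach the model. The deny-list patterns below are hypothetical examples; production systems pair such filters with model-based classifiers, since pattern matching alone is easy to evade.

```python
import re

# Hypothetical deny-list of well-known injection phrasings.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all |any )?(previous|prior) instructions", re.I),
    re.compile(r"reveal (your )?(system prompt|hidden instructions)", re.I),
]

def screen_prompt(prompt: str) -> str:
    for pattern in INJECTION_PATTERNS:
        if pattern.search(prompt):
            # Log and refuse rather than silently rewriting the prompt.
            raise PermissionError("prompt rejected by injection screen")
    return prompt
```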


Supply Chain Attacks: Generative AI is nascent and involves integrating various software components, sometimes with little regard for security, in the race to out-innovate competitors. Each third-party library or component introduces another point of attack.


Even malicious or modified data can fall under the umbrella of supply chain attacks. Refer to the Open Worldwide Application Security Project (OWASP) LLM05: Supply Chain Vulnerabilities for more information.
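
One small, concrete control is to fail fast when a runtime dependency drifts from its reviewed version. The sketch below assumes a hypothetical pin list that would, in practice, be generated from a lock file and reviewed alongside an SBOM.

```python
from importlib.metadata import PackageNotFoundError, version

# Hypothetical pins; generate these from your lock file.
PINNED = {"torch": "2.3.1", "transformers": "4.41.2"}

def verify_dependencies(pins: dict[str, str]) -> None:
    """Abort startup if any dependency drifts from its reviewed version."""
    for name, expected in pins.items():
        try:
            actual = version(name)
        except PackageNotFoundError:
            raise RuntimeError(f"{name} is pinned but not installed")
        if actual != expected:
            raise RuntimeError(f"{name}=={actual}, expected {expected}")

verify_dependencies(PINNED)
```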

Approaches

There isn’t a single way to handle security in the Generative AI space, and any attempt at a formal process risks devolving into the dreaded checklist-based compliance exercise. At its core, security is not about processes and technology but about a mindset and the culture that promotes it.


Fundamentally, securing Generative AI infrastructure requires a secure-by-design architecture. This might include, but is not limited to, access control, a least-privilege model, encryption at rest and in transit, and threat/anomaly detection systems.
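
As one illustration of least privilege, model-management operations can be gated behind explicit role checks. The roles, permissions, and function names below are hypothetical.

```python
import functools

# Hypothetical role model: only "ml-admin" may redeploy models;
# ordinary service roles may only run inference.
ROLE_PERMISSIONS = {
    "ml-admin": {"deploy_model", "run_inference"},
    "inference-service": {"run_inference"},
}

def requires_permission(permission: str):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(caller_role: str, *args, **kwargs):
            if permission not in ROLE_PERMISSIONS.get(caller_role, set()):
                raise PermissionError(f"{caller_role} lacks {permission}")
            return fn(caller_role, *args, **kwargs)
        return wrapper
    return decorator

@requires_permission("deploy_model")
def deploy_model(caller_role: str, artifact_path: str) -> None:
    ...  # deployment logic runs only for authorized roles
```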


The architecture keeps evolving based on new research and changing business needs, and each change can increase the attack surface. To counter this, there should be some form of operational review plus an annual security review, and a culture where people proactively review changes and are bold enough to discuss possible threats.


Supply chain threats are more complex to prevent, but digitally signing all components and validating the signatures before deployment is a good start. A more involved approach is Software Bill of Materials (SBOM) tracking, which is recommended by the Cybersecurity and Infrastructure Security Agency (CISA).
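
A sketch of such a deployment gate is below. It assumes Sigstore's cosign CLI is installed and that a detached signature and public key are distributed with the artifact; any signing tool that supports offline verification would slot in the same way, and the file names are placeholders.

```python
import subprocess

def verify_artifact(artifact: str, signature: str, pubkey: str) -> None:
    """Gate deployment on a valid detached signature (cosign shown here)."""
    result = subprocess.run(
        ["cosign", "verify-blob", "--key", pubkey,
         "--signature", signature, artifact],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        # Refuse to deploy anything whose signature does not verify.
        raise RuntimeError(f"signature verification failed: {result.stderr}")

verify_artifact("model.tar.gz", "model.tar.gz.sig", "release.pub")
```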

Conclusion

The job of a CISO is becoming increasingly complex and involved. Beyond the rise in DDoS attacks reported by Cloudflare, up 20% year-over-year, adding Generative AI to the broader organizational infrastructure introduces more moving parts into an already complex setup. While pressing ahead with innovation and sidelining security might look tempting at first, it is equivalent to playing with fire: one wrong move can end in disaster, whether lost customer trust, a personal data leak, or government-imposed penalties.


One does not have to be a CISO to push for a culture shift. Executives often rely on the people reporting to them for advice, and that advice is in turn sourced from further down the reporting chain. Engineers or program managers who make themselves security champions have a strong chance of standing out, with access to bigger responsibilities and a career boost.


The future of Generative AI hinges on our ability and willingness to secure it. Organizations that recognize this fact and act proactively will be best positioned to thrive and overcome dangers and regulatory hurdles.