paint-brush
How to Address Bugs, Security, & Reliability: AI for Web Devsby@austingil
110 reads

How to Address Bugs, Security, & Reliability: AI for Web Devs

by Austin GilFebruary 1st, 2024
Read on Terminal Reader
Read this story w/o Javascript
tldt arrow

Too Long; Didn't Read

Welcome back to this series where we have been learning how to build web applications with AI. So far in this series, we’ve created a working app that uses AI to determine who would win in a fight between two user-provided opponents and generates text responses and images. It’s working, but we’ve been following the happy path.
featured image - How to Address Bugs, Security, & Reliability: AI for Web Devs
Austin Gil HackerNoon profile picture

Welcome back to this series where we have been learning how to build web applications with AI.

So far in this series, we’ve created a working app that uses AI to determine who would win in a fight between two user-provided opponents and generates text responses and images. It’s working, but we’ve been following the happy path.


  1. Intro & Setup
  2. Your First AI Prompt
  3. Streaming Responses
  4. How Does AI Work
  5. Prompt Engineering
  6. AI-Generated Images
  7. Security & Reliability
  8. Deploying


In this post, we’re going to talk about what happens when things don’t follow the happy path by accounting for error handling and security concerns.

Dealing With Bad HTTP Requests

The first issue to deal with is around our HTTP requests. So far, we’ve just been assuming that the requests from the client to the server will just work.

const response = await jsFormSubmit(form)

// Do something with response

This is a mistake. We need to account for situations where the server experiences an error or returns a bad status code.


In a real-world application, we would want a sophisticated notification service to communicate to users in the event of different errors (server error, validation error, authorization error, not found error, etc.). It would also be good to tie in an error and bug-tracking software so you get notified of any issues.


For today’s example, the discount brand will have to do. We’ll check the response ok property, and in case of a bad response, we’ll just alert the user that there was an error.

const response = await jsFormSubmit(form)

if (!response.ok) {
  state.isLoading = false
  alert("The request experienced an issue.")
  return
}

The code above only accounts for the HTTP request between the client and the server. Don’t forget, we have another request between the server and OpenAI.


Consider the scenario where OpenAI returns a bad status code. How should we communicate that to the end user on the client? This is also tricky and unique to each app. For the sake of convenience, we can do a similar check on the response.ok property.


In the event of a bad request, you’ll once again want to report on the error, and maybe respond to the user with the same status code. I would recommend against passing the response message to the client in case it contains sensitive data.

const response = await fetch('https://api.openai.com/v1/chat/completions', {
  // ... fetch options
})
if (!response.ok) {
  reportError(response)
  throw error(response.status, 'ERROR: Service unavailable');
}

This error handling is very rudimentary, and I’ll leave it like that because unfortunately, I’ve never seen two apps that handle errors the same way. It’s highly subjective.


Suffice it to say that you should spend time thinking about how your app should behave in the event of an error. How do you report it internally, and how do you communicate it to users?


And what happens when users deliberately try to break something…?

Dealing With Bad User Input

In addition to following the happy path where we assumed every HTTP request would always work, we assumed every user was benevolent. This is another mistake. Sometimes, users are malicious. Oftentimes, they are just plain silly. We should account for both.


Any time you receive user-submitted data, you have to validate it. In our app, we expect the user to submit two opponents. What happens if they submit just one, or none, or empty strings? We probably should catch that early before sending the malstructured prompt to OpenAI.


We can add the HTML required attribute to the textareas to tell the form that both inputs need to be filled before the form can be submitted. If the user tries to submit the form without the controls filled in, the browser will prevent the submission, focus on the first invalid input, and provide a little error message telling the user what the problem is.


This is good for the user experience because it provides some early feedback, but client-side validation is easily bypassed, so we have to also validate data on the server. Fortunately, there are some very good validation libraries available that can help with this. My favorite is called Zod. We can install it with npm install zod.


Zod allows us to define a schema that will be used to validate input data. If the input doesn’t match the schema, Zod can either throw an error or report it.


In our app, we are receiving the user input through the requestEvent.parseBody() method, which returns the submitted form data as an object containing opponent1 and opponent2 properties. So, what we need to do is create a validation schema, and then pass the form data into one of the schema validation methods.


I prefer not throwing an error, and instead getting an object back with the validation information. That way, I can add the logic myself to deal with bad data.


Inside my onPost middleware, before doing too much work, let’s make sure we have the right data:

import { z } from 'zod'
// ...

export const onPost: RequestHandler = async (requestEvent) => {
  const formData = await requestEvent.parseBody()

  const schema = z.object({
    opponent1: z.string().min(1),
    opponent2: z.string().min(1),
  })
  const validation = schema.safeParse(formData)

  if (!validation.success) {
    requestEvent.json(400, {
      errors: validation.error.issues
    })
    return 
  }

  // Continue with OpenAI API request and response
}

In the code above, I create an Object schema that should have two properties, opponent1 and opponent2. Both properties are required, must be strings, and cannot be empty. Passing the form data into the schema’s safeParse() method will return an object that can tell me if the validation was successful, what the error was, if any, and the validated data.


In the event of invalid data, I return early from the request handler with an HTTP 400 error response explaining the errors. 400 is the Bad Request status code.


One other thing I like to change is how I use the form data once it’s been validated. Zod also provides a data property on the returned object from safeParse.

const prompt = await promptTemplate.format({
  opponent1: validation.data.opponent1,
  opponent2: validation.data.opponent2
})

In our example, it doesn’t make too much of a difference whether we use this or the form data directly, but it’s nice to get in the habit of using the data property because Zod will coerce the data to the appropriate format.


Form data and query parameters are almost always received as strings, but if your Zod schema was expecting a number, it will try to coerce it for you, turning something like the string "420" into the number 420.


So that covers users sending missing data or not enough data, but what about sending too much?

By giving users unbounded input length that gets injected directly into our prompt, we are opening the gates for users to create massive prompts that would require a lot of tokens and cost us money.


Why don’t we add a maximum length to the inputs to something more appropriate for this app?


We can add a maximum length to both the server-side validation schema and the client-side validation attributes.

// Reusable constant
const MAX_INPUT_LENGTH = 50

// In our schema
const schema = z.object({
  opponent1: z.string().min(1).max(MAX_INPUT_LENGTH),
  opponent2: z.string().min(1).max(MAX_INPUT_LENGTH),
})

// In our template
<Input
  maxLength={MAX_INPUT_LENGTH}
/>

By reducing the amount of data a user can provide, we are reducing the amount of tokens that our API request could potentially use.


This step also limits the amount of flexibility a user has to manipulate our prompt. Consider the fact that a user could provide an “opponent” that actually contains malicious instructions for the app.


This segues nicely to a very important security concern for AI applications specifically.

Dealing With Injection Attacks

We’re doing some basic validation that the data we get from the user is the right type, but we aren’t checking the content that they send us. We’re just blindly sticking it into our prompt and sending it off, and this opens us up to a very interesting kind of attack called a prompt injection attack.


If you’ve done any work building applications with SQL, this may sound similar to an SQL injection attack, and that’s because it is. An SQL injection attack is when a user submits some data of the right type, but containing SQL commands that could be harmful if run.


Here’s an example. Let’s say our app had some SQL logic to select a user by ID based on the input provided:

const query = "SELECT * FROM Users WHERE UserId = " + inputId;

An attacker could provide the string '1 OR 1=1' as the input and would return the information of all the users. This is bad, but you can avoid it by using parameterized queries, stored procedures, or escaping user input. Unless you’re writing raw SQL queries, most tools protect against injection. If you’re interested, here’s more on prevention from OWASP.


Prompt injection is a bit different because prompts don’t have a structured language with specific keywords you can search for. Literally anything you (or the user) provide is a valid prompt, and there’s no easy delineation between what you write and what a user writes.


To their credit, OpenAI does include tactics for prompt engineering that encourage using delimiters to indicate separate parts of the input. This way, you can help the AI identify the different identities, like the system and the user.


It might look like this:

Translate the text after the delimiting characters "~~~~~":

~~~~~

[text to be translated]

This is a big improvement as it provides a clearer separation between the system and the user input. Still, it’s not without gaps.


Simon Willison has pointed out several examples of prompt injection attacks and why it may never be a solved problem:



It’s kind of scary and should make you think twice about building AI-powered apps. But this is a bit of a Pandora’s box. Even with its inherent vulnerabilities, AI is here to stay. My suggestion is to keep yourself up to date on attack vectors and incorporate several layers of security.

Conclusion

Building applications with the happy path in mind is great, but we must also address the not-so-happy path. It’s important to familiarize ourselves with points of failure and vulnerabilities and address them appropriately. This makes our applications more secure for our users and more reliable.


In this post, we discussed:

  • What happens when our app experiences HTTP errors?
  • How do we validate user input?
  • What are some security concerns specifically for AI apps?


This is not a comprehensive list, but I hope it serves as a good starting point. With this work out of the way, I think we are ready to launch our app to the world. We’ll do that in the next post.

  1. Intro & Setup
  2. Your First AI Prompt
  3. Streaming Responses
  4. How Does AI Work
  5. Prompt Engineering
  6. AI-Generated Images
  7. Security & Reliability
  8. Deploying


Thank you so much for reading. If you liked this article, and want to support me, the best ways to do so are to share itsign up for my newsletter, and follow me on Twitter.


Originally published on austingil.com.