A beginner-friendly, step-by-step guide to using Node.js and GPT-3 for classifying Replicate models
I built Replicate Codex, a platform that lets you search, filter, and sort AI models to find the perfect fit for your AI project. To sort and filter the models, you need tags (such as "Image-to-Image," "Text-to-Text," etc.) to classify them by. But the models don't come tagged on their own, so I have to review each model's documentation to find out what task it's meant for, and then assign the tags manually.
That's not scalable, so I thought, "What if I could use AI to classify these models for me?" That would fully automate the work, and it might even be more accurate than doing it myself.
I decided to use AI to classify the AI models, so you could build AI while using AI to help you... you get the picture. In this article, I will show you how to set up a Node.js script that uses GPT-3 to classify Replicate Codex models. It's like AI inception! Let's get started.
Replicate Codex is a platform designed to make it easier for developers, researchers, and businesses to discover and utilize AI models for their projects. The platform offers a wide range of pre-built AI models that users can search, filter, and sort to find the most suitable model for their specific needs.
By offering a comprehensive, easy-to-navigate repository of AI models, Replicate Codex helps accelerate AI adoption and development across various industries.
The goal of this project is to automate the process of tagging models, which in turn helps users find the right AI models for their projects without having to manually sift through countless options. That saves them time and money as they build!
Replicate Codex displays Replicate models, and Replicate models don't come with tags, which makes classifying them all by hand a daunting task. But with the help of GPT-3, we can streamline this process and make it more efficient. This tutorial demonstrates how to use a Node.js script to classify the models, and it also previews some improvements I have planned, such as ingesting API specs to enhance the classification process.
I wrote a script that uses GPT-3 to do this classification. Let's see how it works.
First, we need to import the necessary modules and set up the environment variables:
import { createClient } from "@supabase/supabase-js";
import got from "got";
import dotenv from "dotenv";
dotenv.config();
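If you're following along, install the three dependencies first and create a .env file with your keys. The variable names below match the ones the script reads; the values are placeholders:

npm install @supabase/supabase-js got dotenv

# .env
OPENAI_SECRET_KEY=your-openai-secret-key
SUPABASE_URL=https://your-project.supabase.co
SUPABASE_SERVICE_KEY=your-service-role-key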
Next, we define some constants and prepare the classification categories:
const openaiApiKey = process.env.OPENAI_SECRET_KEY;
const supabaseUrl = process.env.SUPABASE_URL;
const supabaseKey = process.env.SUPABASE_SERVICE_KEY;
const supabase = createClient(supabaseUrl, supabaseKey);
const types = ["Text", "Image", "Audio", "Video"];
const classificationCategories = types.flatMap((fromType) =>
types.map((toType) => `${fromType}-to-${toType}`)
);
We can combine any two types to create a tag ("Text-to-Image," etc.).
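With four types, that gives us sixteen possible categories:

console.log(classificationCategories);
// [
//   "Text-to-Text",  "Text-to-Image",  "Text-to-Audio",  "Text-to-Video",
//   "Image-to-Text", "Image-to-Image", "Image-to-Audio", "Image-to-Video",
//   "Audio-to-Text", "Audio-to-Image", "Audio-to-Audio", "Audio-to-Video",
//   "Video-to-Text", "Video-to-Image", "Video-to-Audio", "Video-to-Video"
// ]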
Now, we create the primary function for our script, called classifyModelsAndUpdateTags(). It will fetch the models, generate a prompt, and call the GPT-3 API.
The classifyModelsAndUpdateTags function has a few components. Let's take a look at them in more detail.
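Here's the overall shape of the function, as a condensed outline. Each step is shown in full below:

async function classifyModelsAndUpdateTags() {
  // 1. Fetch all models from Supabase that don't have tags yet
  // 2. For each model, build a classification prompt from its description
  // 3. Send the prompt to the GPT-3 API and clean up the response
  // 4. If the response matches one of our categories, save it as the model's tag
}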
First, the function fetches all models from the Supabase database that don't have any tags associated with them:
const { data: models, error: fetchError } = await supabase
.from("modelsData")
.select("*")
.filter("tags", "eq", "");
await is used here because the Supabase query is an asynchronous operation. By using await, the code execution pauses until the query completes, ensuring that the models variable is populated with data before proceeding.
After fetching the models, the function checks for any errors that might have occurred during the query. If there's an error, it's logged to the console, and the function returns early:
if (fetchError) {
console.error(fetchError);
return;
}
If the query is successful, the function loops through each model, generating a prompt for GPT-3 based on the model's description. I pass in the allowed categories along with the description and other model info, and GPT-3 returns a classification. The prompt doesn't always return exactly what I want, but most of the time it does.
for (const model of models) {
const description = model.description ?? "No description provided.";
const prompt = `Classify the following model into one of the specified categories...`;
}
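For illustration, the prompt could look something like the sketch below. This is my hypothetical wording, not the exact prompt from the script (which is abbreviated above), and model.modelName is an assumed field name:

// Hypothetical prompt construction (model.modelName is an assumed field;
// the script's real prompt text is abbreviated above)
const prompt = `Classify the following model into one of these categories: ${classificationCategories.join(", ")}.
Model name: ${model.modelName}
Description: ${description}
Respond with the category name only.`;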
Inside the loop, the function makes a POST request to the GPT-3 API using the got library:
const response = await got
.post("https://api.openai.com/v1/engines/davinci/completions", {
json: { prompt, max_tokens: 10 },
headers: { Authorization: `Bearer ${openaiApiKey}` },
})
.json();
Once the response is received, it's cleaned up by stripping out any characters that aren't letters, numbers, hyphens, or whitespace, then trimming extra spaces. I have to do this because sometimes GPT-3 doesn't follow instructions exactly.
const category = response.choices[0]?.text
.replace(/[^\w\s-]/g, "")
.trim();
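For example, if GPT-3 pads its answer with a stray period and newline, the regex and trim() clean it up:

const raw = " Text-to-Image.\n"; // a hypothetical raw completion from GPT-3
raw.replace(/[^\w\s-]/g, "").trim(); // => "Text-to-Image"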
I also validate the response. If the cleaned-up category is valid, the function updates the model's tags in the Supabase database:
if (classificationCategories.includes(category)) {
const { error: updateError } = await supabase
.from("modelsData")
.update({ tags: category })
.match({ id: model.id });
// Error handling and logging
}
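The error handling elided in the comment above might look something like this (a minimal sketch, not the script's exact logging):

if (updateError) {
  console.error(`Failed to update tags for model ${model.id}:`, updateError);
} else {
  console.log(`Tagged model ${model.id} as "${category}"`);
}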
Finally, the classifyModelsAndUpdateTags() function is called to run the entire classification process:
classifyModelsAndUpdateTags();
So, to recap, the classifyModelsAndUpdateTags() function fetches models without tags, generates a prompt for GPT-3, calls the GPT-3 API, and updates each model's tags in the database based on the classification it receives.
While the current script does a decent job of classifying models using GPT-3, there are ways to make it more robust. Here's how I'd like to take it further:
To improve classification accuracy, I'm going to pull in more data to feed the prompt, such as each model's API spec. Some models don't come with much context or detail, so this will give GPT-3 a better chance of making a good guess.
As the number of models grows, the script will take longer to execute. I can optimize performance by implementing batch processing (see the sketch below). There might also be better ways to fetch the data.
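Here's a rough sketch of what batching could look like, assuming a hypothetical classifyModel() helper that wraps the prompt-building, API call, and tag-update steps shown above:

const BATCH_SIZE = 10; // hypothetical batch size
for (let i = 0; i < models.length; i += BATCH_SIZE) {
  const batch = models.slice(i, i + BATCH_SIZE);
  // Classify up to BATCH_SIZE models concurrently instead of one at a time
  await Promise.all(batch.map((model) => classifyModel(model)));
}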
While GPT-3 is an excellent choice for this task, there are other AI models and techniques that could be explored to improve the classification process further. Investigating different classifier models could be a good next step.
In this tutorial, we've walked through a Node.js script that harnesses the power of GPT-3 to classify Replicate models. By using AI to classify AI models, we're taking a significant step towards streamlining the process of discovering and utilizing AI models in various projects. Kinda circular, but kinda fun!
We've also discussed how I can improve the script further. There's much more to build. If you're excited about the possibilities that AI has to offer and would like to stay updated on future developments and tutorials, make sure to subscribe to our mailing list and follow me on Twitter - I'm also happy to answer any questions you have. Thanks for reading!