Today's article is the first of a three-to-four-part tutorial series where we'll work towards building our very own custom Q&A chatbot.

**Table of Contents**

- Introduction
- Generating an OpenAI API key
- Setting up a Node/Express app
- Interacting with OpenAI’s completions endpoint
- Closing thoughts

We'll achieve our goal of building a custom Q&A chatbot by:

1. Building a Node/Express server to interact with OpenAI’s APIs (today’s email).
2. Using React to build the UI of our Q&A chatbot.
3. Finally, investigating how to fine-tune our app so our Q&A chatbot returns custom information.

Our final app will look something like the following:

Today, we’ll focus solely on creating a Node.js server where we can interact directly with OpenAI’s APIs. This is a precursor to setting up our front-end app, which will then interact with the local API we create here. If you want to follow along, you’ll need Node and npm installed on your machine and an OpenAI API key (we’ll show you how to get one in the next section).

## Generating an OpenAI API key

Follow these steps to generate an API key with OpenAI:

1. Sign up for an account on the OpenAI website (https://platform.openai.com/).
2. Once your account is created, visit the API keys page at https://platform.openai.com/account/api-keys.
3. Create a new key by clicking the “Create new secret key” button.

When an API key is created, you’ll be able to copy the secret key and use it when you begin development.

Note: OpenAI currently provides $18 of free credits for 3 months, which is great since you won’t have to provide your payment details to begin interacting with the API for the first time.

## Setting up a Node/Express app

We’ll now create a new directory for our Node project, which we’ll call `custom_chat_gpt`:

```shell
mkdir custom_chat_gpt
```

We’ll navigate into the new directory and run the following command to create a `package.json` file:
```shell
npm init -y
```

Once the `package.json` file has been created, we’ll install the three dependencies we need for now:

```shell
npm install dotenv express openai
```

- `dotenv`: allows us to load environment variables from a `.env` file when working locally.
- `express`: the Node.js framework we’ll use to spin up a Node server.
- `openai`: the Node.js library for the OpenAI API.

We’ll then create a file named `index.js`. The `index.js` file is where we’ll build our Node.js/Express server.

In the `index.js` file, we’ll:

- Include the `express` module with `require("express")`.
- Run the `express()` function to define the Express instance and assign it to a constant labeled `app`.
- Register a middleware on our Express instance (with `app.use()`) to parse incoming JSON requests and place the parsed data in `req.body`.
- Specify a `port` variable whose value comes from a `PORT` environment variable, falling back to `5000` if `PORT` is undefined.

```js
const express = require("express");

const app = express();
app.use(express.json());

const port = process.env.PORT || 5000;
```

We’ll then set up a route labeled `POST /ask` that will act as the endpoint our client triggers. In this route, we’ll expect a `prompt` value to exist in the request body; if it doesn’t, we’ll throw an error. If the `prompt` value does exist, we’ll simply return a response of status `200` that contains the prompt in a `message` field.

Lastly, we’ll call `app.listen()` to have our app listen on the port value we’ve specified in the `port` variable:
```js
const express = require("express");

const app = express();
app.use(express.json());

const port = process.env.PORT || 5000;

// POST request endpoint
app.post("/ask", async (req, res) => {
  // getting prompt question from request
  const prompt = req.body.prompt;

  try {
    if (prompt == null) {
      throw new Error("Uh oh, no prompt was provided");
    }

    // return the result
    return res.status(200).json({
      success: true,
      message: prompt,
    });
  } catch (error) {
    console.log(error.message);
    // respond with an error status so the request doesn't hang
    return res.status(500).json({
      success: false,
      message: error.message,
    });
  }
});

app.listen(port, () => console.log(`Server is running on port ${port}!!`));
```

With this change saved, it's a good time to test that everything works. We’ll run the server with:

```shell
node index.js
```

With our server running, we can attempt to trigger our `POST /ask` request through a `curl` command to verify our server is set up appropriately:

```shell
curl -X POST \
  http://localhost:5000/ask \
  -H 'Content-Type: application/json' \
  -d '{
    "prompt": "Hi! This is a test prompt!"
  }'
```

We’ll be provided with a successful response with our prompt returned to us. With our server now working as intended, we can move towards having our `/ask` endpoint interact with OpenAI’s `/completions` endpoint.

## Interacting with OpenAI’s completions endpoint

OpenAI provides a `/completions` endpoint in their API that provides completion suggestions for text input. When we send a request to the `/completions` endpoint that includes a prompt (or seed text), the API generates a continuation of that text based on its training data.

With this endpoint, we can build our own custom version of ChatGPT (with the caveat that ChatGPT most likely uses a more powerful machine-learning model that isn’t available through the OpenAI API).

To interact with the OpenAI API, we’ll need the unique API key we created earlier through the OpenAI website. Sensitive information, such as API keys, should not be hard-coded directly into the application source code.
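One way to enforce this discipline — a small sketch of my own, not part of the article's code, which reads `process.env` directly — is a helper that fails fast when a required environment variable is missing:

```javascript
// requireEnv: read an environment variable and throw a descriptive error
// when it's missing, instead of silently passing `undefined` around.
// (Hypothetical helper for illustration; not used in the article's server.)
function requireEnv(name) {
  const value = process.env[name];
  if (value === undefined || value === "") {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}
```

Failing at startup with a clear message is usually easier to debug than an authentication error surfacing later, deep inside an API call.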
We’ll create a `.env` file in the root directory of our `custom_chat_gpt` project to store environment variables that contain sensitive information. In the `.env` file, we’ll create a new environment variable labeled `OPENAI_API_KEY` and give it the value of the OpenAI API key:

```shell
# your unique API key value goes here
OPENAI_API_KEY=sk-############
```

In our `index.js` file, we’ll require and configure the `dotenv` module to load environment variables from the `.env` file into the `process` environment of our application. We’ll also import the classes we need from the `openai` Node.js library — the `Configuration` and `OpenAIApi` classes:

```js
// configure dotenv
require("dotenv").config();

// import modules from OpenAI library
const { Configuration, OpenAIApi } = require("openai");

// ...
```

Next, we’ll create a configuration object for interacting with the OpenAI API. We’ll do this by instantiating the `Configuration()` constructor and passing the value of the `OPENAI_API_KEY` environment variable to the `apiKey` field. We’ll then set up a new instance of the OpenAI API class like the following:

```js
const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
});

const openai = new OpenAIApi(configuration);
```

We can now use the `openai` variable we’ve created to make API calls and process responses from OpenAI. In our `POST /ask` request function, we’ll run the `openai.createCompletion()` function, which essentially triggers a call to OpenAI’s completions endpoint:

```js
app.post("/ask", async (req, res) => {
  const prompt = req.body.prompt;

  try {
    if (prompt == null) {
      throw new Error("Uh oh, no prompt was provided");
    }

    // trigger OpenAI completion
    const response = await openai.createCompletion();

    // ...
  } catch (error) {
    console.log(error.message);
  }
});
```

The OpenAI completions endpoint allows us to pass a large number of optional fields into the request to modify how we want our text completion to behave.
For our use case, we’ll only look into providing values for two fields — `model` and `prompt`.

- `model`: specifies the name of the language model that the API should use to generate the response to the request. OpenAI provides several different language models, each with its own strengths and capabilities. For our use case, we’ll specify the `text-davinci-003` model, which is OpenAI’s most capable GPT-3 model.
- `prompt`: the prompt that we want OpenAI to generate a completion for. Here we’ll just pass the `prompt` value that exists in the body of our `/ask` request.

```js
app.post("/ask", async (req, res) => {
  const prompt = req.body.prompt;

  try {
    if (prompt == null) {
      throw new Error("Uh oh, no prompt was provided");
    }

    // trigger OpenAI completion
    const response = await openai.createCompletion({
      model: "text-davinci-003",
      prompt,
    });

    // ...
  } catch (error) {
    console.log(error.message);
  }
});
```

The text returned from the OpenAI response exists within a `choices` array, which itself is within a `response.data` object. We’ll access the text returned from the first choice in the API response, which will look like the following:

```js
app.post("/ask", async (req, res) => {
  const prompt = req.body.prompt;

  try {
    if (prompt == null) {
      throw new Error("Uh oh, no prompt was provided");
    }

    const response = await openai.createCompletion({
      model: "text-davinci-003",
      prompt,
    });

    // retrieve the completion text from response
    const completion = response.data.choices[0].text;

    // ...
  } catch (error) {
    console.log(error.message);
  }
});
```

The last thing we’ll do is have this completion be returned in the successful response of our `/ask` request.
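Note that `response.data.choices[0].text` assumes at least one choice came back. A defensive variant — my own sketch; the article accesses the field directly, which is fine for this tutorial — could guard against an empty or missing `choices` array:

```javascript
// extractCompletion: pull the completion text out of an OpenAI completions
// response, returning null instead of throwing when no usable choice exists.
// (Hypothetical helper for illustration; the article indexes choices[0] directly.)
function extractCompletion(response) {
  const choices = (response && response.data && response.data.choices) || [];
  if (choices.length === 0 || typeof choices[0].text !== "string") {
    return null; // no usable completion came back
  }
  // completions often lead with newline characters; trim for a cleaner message
  return choices[0].text.trim();
}
```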
With this change and all the changes we’ve made, our `index.js` file will look like the following:

```js
require("dotenv").config();
const express = require("express");
const { Configuration, OpenAIApi } = require("openai");

const app = express();
app.use(express.json());

const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
});

const openai = new OpenAIApi(configuration);

const port = process.env.PORT || 5000;

app.post("/ask", async (req, res) => {
  const prompt = req.body.prompt;

  try {
    if (prompt == null) {
      throw new Error("Uh oh, no prompt was provided");
    }

    const response = await openai.createCompletion({
      model: "text-davinci-003",
      prompt,
    });

    const completion = response.data.choices[0].text;

    return res.status(200).json({
      success: true,
      message: completion,
    });
  } catch (error) {
    console.log(error.message);
    // respond with an error status so the request doesn't hang
    return res.status(500).json({
      success: false,
      message: error.message,
    });
  }
});

app.listen(port, () => console.log(`Server is running on port ${port}!!`));
```

We’ll save our changes and restart our server. With our server running, we can ask our API questions like `"What is the typical weather in Dubai?"`:

```shell
curl -X POST \
  http://localhost:5000/ask \
  -H 'Content-Type: application/json' \
  -d '{
    "prompt": "What is the typical weather in Dubai?"
  }'
```

After waiting a few seconds, our API will return a valid answer to our question!

That’s it! We’ve managed to build a simple API with Node/Express that interacts with OpenAI’s completions endpoint. Next week, we’ll continue this tutorial by building a React app that triggers the `/ask` request when an input field is submitted.

## Closing thoughts

You can find the source code for this article at github.com/djirdehh/frontend-fresh/articles_source_code.

Do interact with the `/completions` endpoint and feel free to ask it any prompt that you would like! If you're not a fan of using `curl` to test requests locally, Postman is a popular tool to test API requests through a client.

Do read more on all the different optional fields we can provide in OpenAI's `/completions` endpoint in the OpenAI API documentation.
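As a taste of what those optional fields look like in practice, here's a sketch of a small request builder. `max_tokens` and `temperature` are genuine completions parameters, but the default values below are my own illustrative choices, not the API's defaults:

```javascript
// buildCompletionRequest: assemble the options object that would be passed to
// openai.createCompletion(). The article only sets `model` and `prompt`;
// `max_tokens` and `temperature` are shown here as examples of optional tuning.
function buildCompletionRequest(prompt, options = {}) {
  return {
    model: options.model || "text-davinci-003",
    prompt,
    // cap the length of the generated text, measured in tokens
    max_tokens: options.maxTokens ?? 256,
    // 0 = near-deterministic output; higher values = more varied output
    temperature: options.temperature ?? 0.7,
  };
}
```

With a helper like this, experimenting with different settings becomes a one-line change: `openai.createCompletion(buildCompletionRequest(prompt, { temperature: 0 }))`.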
Subscribe at https://www.frontendfresh.com/ for more tutorials like this to hit your inbox on a weekly basis!

This original article was sent out by the frontend fresh newsletter.