ChatGPT (Generative Pre-trained Transformer) is a chatbot launched by OpenAI as a prototype on November 30, 2022. It is built on top of OpenAI's GPT-3.5 family of large language models and is fine-tuned with both supervised and reinforcement learning techniques. It quickly gained attention for its detailed responses and articulate answers across many domains of knowledge, although its uneven factual accuracy was identified as a significant drawback.

In this article, we'll learn how to use the OpenAI API to build a ChatGPT application in Flutter. To build this application, we'll need the following:

- API token: we will need an API token from OpenAI; you can get yours from the OpenAI account dashboard. If you don't have an account, you can create one.
- http: a Flutter package for handling HTTP requests.
- provider: an easy-to-use package that is basically a wrapper around InheritedWidget. It provides a state-management technique used to manage a piece of data across the app.
- animated_text_kit: a Flutter package that contains a collection of cool text animations.
- flutter_svg: an SVG rendering and widget library for Flutter, which allows painting and displaying Scalable Vector Graphics files.

Add the four packages to the dependencies section of your pubspec.yaml and run flutter pub get before continuing.

With all things set, let's start building. 🍾 🍻

Open your terminal and create your Flutter app using the Flutter CLI (Flutter requires a valid Dart package name, so use underscores rather than hyphens):

```
flutter create openai_chat
```

When the app has been created, open the folder in VS Code or whatever text editor you use. Open the lib folder and the main.dart file, and clear out the initial code that was generated with the app, because we are going to build our app from the ground up.

Create a MyApp Stateful Widget; your main.dart file will now look like this:

```dart
import 'package:flutter/material.dart';

void main() {
  WidgetsFlutterBinding.ensureInitialized();
  runApp(const MyApp());
}

class MyApp extends StatefulWidget {
  const MyApp({super.key});

  @override
  State<MyApp> createState() => _MyAppState();
}

class _MyAppState extends State<MyApp> {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: "Open AI Chat",
      home: SafeArea(
        bottom: true,
        top: false,
        child: Scaffold(
          backgroundColor: const Color(0xff343541),
          appBar: AppBar(
            backgroundColor: const Color(0xff343541),
            leading: IconButton(
              onPressed: () {},
              icon: const Icon(
                Icons.menu,
                color: Color(0xffd1d5db),
              ),
            ),
            elevation: 0,
            title: const Text("New Chat"),
            centerTitle: true,
            actions: [
              IconButton(
                onPressed: () {},
                icon: const Icon(
                  Icons.add,
                  color: Color(0xffd1d5db),
                ),
              ),
            ],
          ),
          // The Stack starts out empty; we will fill it in at the end.
          body: Stack(
            children: const [],
          ),
        ),
      ),
    );
  }
}
```
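One detail the snippet above does not show: the UserInput widget we are about to build, and the final main.dart snippet at the end of the article, both refer to a TextEditingController named chatcontroller inside _MyAppState. Here is a minimal sketch of the members you can add to _MyAppState for that (the dispose override is just good housekeeping, not something the rest of the code depends on):

```dart
// Add these members inside _MyAppState in lib/main.dart
// (material.dart is already imported there).

// Controller that the UserInput widget reads from and that we clear after sending.
final TextEditingController chatcontroller = TextEditingController();

@override
void dispose() {
  // Free the controller's resources when the widget is removed from the tree.
  chatcontroller.dispose();
  super.dispose();
}
```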
Now that we have our app set up, we can start building all the different widgets. We are aiming for four (4) different widgets:

- User Input widget
- User Message widget
- AI Message widget
- Loader widget

Create a folder called widgets; it will contain all four widgets that we will work on next (there is an optional barrel-file sketch after the Loader widget below).

User Input Widget

```dart
import 'package:flutter/material.dart';

class UserInput extends StatelessWidget {
  final TextEditingController chatcontroller;

  const UserInput({
    Key? key,
    required this.chatcontroller,
  }) : super(key: key);

  @override
  Widget build(BuildContext context) {
    return Align(
      alignment: Alignment.bottomCenter,
      child: Container(
        padding: const EdgeInsets.only(
          top: 10,
          bottom: 10,
          left: 5,
          right: 5,
        ),
        decoration: const BoxDecoration(
          color: Color(0xff444654),
          border: Border(
            top: BorderSide(
              color: Color(0xffd1d5db),
              width: 0.5,
            ),
          ),
        ),
        child: Row(
          children: [
            Expanded(
              flex: 1,
              child: Image.asset(
                "images/avatar.png",
                height: 40,
              ),
            ),
            Expanded(
              flex: 5,
              child: TextFormField(
                onFieldSubmitted: (e) {
                  // We will call the ChatModel from here later on.
                },
                controller: chatcontroller,
                style: const TextStyle(
                  color: Colors.white,
                ),
                decoration: const InputDecoration(
                  focusColor: Colors.white,
                  filled: true,
                  fillColor: Color(0xff343541),
                  suffixIcon: Icon(
                    Icons.send,
                    color: Color(0xffacacbe),
                  ),
                  focusedBorder: OutlineInputBorder(
                    borderSide: BorderSide.none,
                    borderRadius: BorderRadius.all(
                      Radius.circular(5.0),
                    ),
                  ),
                  border: OutlineInputBorder(
                    borderRadius: BorderRadius.all(
                      Radius.circular(5.0),
                    ),
                  ),
                ),
              ),
            ),
          ],
        ),
      ),
    );
  }
}
```

The UserInput widget accepts one parameter, the chatcontroller. We also have the onFieldSubmitted callback, which will come into play when the user submits their message. (The avatar image and the SVG used below live in an images folder; remember to declare it under assets in pubspec.yaml.)

User Message Widget

```dart
import 'package:flutter/material.dart';

class UserMessage extends StatelessWidget {
  final String text;

  const UserMessage({
    Key? key,
    required this.text,
  }) : super(key: key);

  @override
  Widget build(BuildContext context) {
    return Container(
      padding: const EdgeInsets.all(8),
      child: Row(
        mainAxisAlignment: MainAxisAlignment.start,
        crossAxisAlignment: CrossAxisAlignment.start,
        children: [
          Expanded(
            flex: 1,
            child: Padding(
              padding: const EdgeInsets.all(8.0),
              child: Image.asset(
                "images/avatar.png",
                height: 40,
                width: 40,
                fit: BoxFit.contain,
              ),
            ),
          ),
          Expanded(
            flex: 5,
            child: Padding(
              padding: const EdgeInsets.only(
                left: 3,
                top: 8,
              ),
              child: Text(
                text,
                style: const TextStyle(
                  color: Color(0xffd1d5db),
                  fontSize: 16,
                  fontWeight: FontWeight.w700,
                ),
              ),
            ),
          ),
        ],
      ),
    );
  }
}
```

The UserMessage widget takes the user's message as a parameter; the resulting widget will be appended to the ListView.

AI Message Widget

```dart
import 'package:flutter/material.dart';
import 'package:animated_text_kit/animated_text_kit.dart';
import 'package:flutter_svg/flutter_svg.dart';

class AiMessage extends StatelessWidget {
  final String text;

  const AiMessage({
    Key? key,
    required this.text,
  }) : super(key: key);

  @override
  Widget build(BuildContext context) {
    return Container(
      color: const Color(0xff444654),
      padding: const EdgeInsets.all(8),
      child: Row(
        crossAxisAlignment: CrossAxisAlignment.start,
        children: [
          Expanded(
            flex: 1,
            child: Padding(
              padding: const EdgeInsets.all(8.0),
              child: Container(
                color: const Color(0xff0fa37f),
                padding: const EdgeInsets.all(3),
                child: SvgPicture.asset(
                  "images/ai-avatar.svg",
                  height: 30,
                  width: 30,
                  fit: BoxFit.contain,
                ),
              ),
            ),
          ),
          Expanded(
            flex: 5,
            child: AnimatedTextKit(
              animatedTexts: [
                TypewriterAnimatedText(
                  text,
                  textStyle: const TextStyle(
                    color: Color(0xffd1d5db),
                    fontSize: 16,
                    fontWeight: FontWeight.w700,
                  ),
                ),
              ],
              totalRepeatCount: 1,
            ),
          ),
        ],
      ),
    );
  }
}
```

The AiMessage widget takes the AI's response text as a parameter; the resulting widget will be appended to the ListView. Using the AnimatedTextKit package, we can animate the text with a typewriter animation.

Loader Widget

```dart
import 'package:flutter/material.dart';
import 'package:flutter_svg/flutter_svg.dart';

class Loading extends StatelessWidget {
  final String text;

  const Loading({
    Key? key,
    required this.text,
  }) : super(key: key);

  @override
  Widget build(BuildContext context) {
    return Container(
      color: const Color(0xff444654),
      padding: const EdgeInsets.all(8),
      child: Row(
        crossAxisAlignment: CrossAxisAlignment.start,
        children: [
          Expanded(
            flex: 1,
            child: Padding(
              padding: const EdgeInsets.all(8.0),
              child: Container(
                color: const Color(0xff0fa37f),
                padding: const EdgeInsets.all(3),
                child: SvgPicture.asset(
                  "images/ai-avatar.svg",
                  height: 30,
                  width: 30,
                  fit: BoxFit.contain,
                ),
              ),
            ),
          ),
          Expanded(
            flex: 5,
            child: Text(
              text,
              style: const TextStyle(
                color: Color(0xffd1d5db),
                fontSize: 16,
                fontWeight: FontWeight.w700,
              ),
            ),
          ),
        ],
      ),
    );
  }
}
```

The Loading widget is shown while we await a response from the API call; once the response arrives, we remove the loader from the list.
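How you split the four widgets into files is up to you; one tidy option is a separate file per widget plus a small barrel file in the widgets folder, so main.dart only needs a single import. The file names below are just an assumption about that split, not something the rest of the article depends on:

```dart
// lib/widgets/widgets.dart
// A barrel file that re-exports the four chat widgets
// (assuming one file per widget with these names).
export 'user_input.dart';
export 'user_message.dart';
export 'ai_message.dart';
export 'loading.dart';
```

With that in place, a single import 'widgets/widgets.dart'; in main.dart brings in UserInput, UserMessage, AiMessage, and Loading at once.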
APP Constant

Create a file called api_constants.dart; it will contain our endpoint and API token. You can get your API token from OpenAI's API keys dashboard.

```dart
const endpoint = "https://api.openai.com/v1/";
const aiToken = "sk-------------------------------------";
```

OpenAI Repository

Now, let's communicate with the OpenAI API. Create a file called openai_repository.dart in the repository folder. In it we have a class called OpenAiRepository with a static method called sendMessage that accepts a single parameter, prompt.

```dart
import 'dart:convert';

import 'package:http/http.dart' as http;

// Adjust this import to wherever you placed api_constants.dart.
import 'api_constants.dart';

class OpenAiRepository {
  static var client = http.Client();

  static Future<Map<String, dynamic>> sendMessage({required prompt}) async {
    try {
      var headers = {
        'Authorization': 'Bearer $aiToken',
        'Content-Type': 'application/json'
      };

      var request = http.Request('POST', Uri.parse('${endpoint}completions'));
      request.body = json.encode({
        "model": "text-davinci-003",
        "prompt": prompt,
        "temperature": 0,
        "max_tokens": 2000
      });
      request.headers.addAll(headers);

      http.StreamedResponse response = await request.send();

      if (response.statusCode == 200) {
        final data = await response.stream.bytesToString();
        return json.decode(data);
      } else {
        return {
          "status": false,
          "message": "Oops, there was an error",
        };
      }
    } catch (_) {
      return {
        "status": false,
        "message": "Oops, there was an error",
      };
    }
  }
}
```

Authentication

The OpenAI API uses API keys for authentication. Retrieve the API key you'll use in your requests. All API requests should include your API key in an Authorization HTTP header as follows:

```
Authorization: Bearer YOUR_API_KEY
```

Making a Request

```dart
{
  "model": "text-davinci-003",
  "prompt": prompt,
  "temperature": 0,
  "max_tokens": 2000
}
```

This request asks the Davinci model to complete the text starting with the prompt sent from the user input. The max_tokens parameter sets an upper bound on how many tokens the API will return. A higher temperature means the model will take more risks: try 0.9 for more creative applications, and 0 for ones with a well-defined answer.

The call returns a response that looks like this, which sendMessage decodes into a Map<String, dynamic>:

```json
{
  "id": "cmpl-GERzeJQ4lvqPk8SkZu4XMIuR",
  "object": "text_completion",
  "created": 1586839808,
  "model": "text-davinci:003",
  "choices": [
    {
      "text": "\n\nThis is indeed a test",
      "index": 0,
      "logprobs": null,
      "finish_reason": "length"
    }
  ],
  "usage": {
    "prompt_tokens": 5,
    "completion_tokens": 7,
    "total_tokens": 12
  }
}
```
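Before wiring the repository into the UI, you can sanity-check it with a tiny throwaway script that sends one prompt and digs the completion text out of that response map. This is only a sketch: the import path is assumed, and the error branch relies on the {"status": false, "message": ...} map that sendMessage returns on failure.

```dart
// A quick smoke test for OpenAiRepository (run it with `dart run`).
import 'openai_repository.dart'; // assumed path; adjust to your project layout

Future<void> main() async {
  // Send a single prompt to the completions endpoint.
  final response =
      await OpenAiRepository.sendMessage(prompt: "Say this is a test");

  if (response.containsKey('choices')) {
    // The completion text lives at choices[0].text, as in the sample response above.
    final text = response['choices'][0]['text'];
    print(text.toString().trim());
  } else {
    // sendMessage returns {"status": false, "message": "..."} when something goes wrong.
    print("Request failed: ${response['message']}");
  }
}
```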
ChatModel

```dart
import 'package:flutter/material.dart';

// Also import OpenAiRepository and the UserMessage, AiMessage and Loading
// widgets from wherever you saved them in your project.

class ChatModel extends ChangeNotifier {
  List<Widget> messages = [];

  List<Widget> get getMessages => messages;

  Future<void> sendChat(String txt) async {
    addUserMessage(txt);

    Map<String, dynamic> response =
        await OpenAiRepository.sendMessage(prompt: txt);
    String text = response['choices'][0]['text'];

    // Remove the last item (the loader) before appending the AI reply.
    messages.removeLast();
    messages.add(AiMessage(text: text));
    notifyListeners();
  }

  void addUserMessage(txt) {
    messages.add(UserMessage(text: txt));
    messages.add(const Loading(text: "..."));
    notifyListeners();
  }
}
```

Since we are using provider for state management, we create a class called ChatModel which extends ChangeNotifier. We keep an empty List<Widget> that we push new message widgets into, plus a getter, getMessages, to read them.

We create a method called sendChat, which takes the user input and calls addUserMessage; that pushes a widget containing the user's message, together with the loader widget, onto the messages list.

Next, we send the prompt to the OpenAI repository, which sends back a response. We store the completion in a String called text, remove the Loading widget from the list, and add the AiMessage widget. 🤞🏽

Almost done… We have to go back to our UserInput widget and call sendChat when the user submits their message (import package:provider/provider.dart in that file so context.read is available). Your code will look much like this now:

```dart
TextFormField(
  onFieldSubmitted: (e) {
    context.read<ChatModel>().sendChat(e);
    chatcontroller.clear();
  },
  // ...the controller, style and decoration stay exactly as before.
),
```

🚀 Hit it

All we have to do now is edit our main.dart file. Wrap the body in a MultiProvider, and your code will look something like this:

```dart
body: MultiProvider(
  providers: [
    ChangeNotifierProvider(create: (_) => ChatModel()),
  ],
  child: Consumer<ChatModel>(builder: (context, model, child) {
    List<Widget> messages = model.getMessages;
    return Stack(
      children: [
        // Chat history.
        Container(
          margin: const EdgeInsets.only(bottom: 80),
          child: ListView(
            children: [
              const Divider(
                color: Color(0xffd1d5db),
              ),
              for (int i = 0; i < messages.length; i++) messages[i]
            ],
          ),
        ),
        // Input bar pinned to the bottom.
        UserInput(
          chatcontroller: chatcontroller,
        )
      ],
    );
  }),
),
```

🛸🚁 App Running

All done, you can start using ChatGPT in your Flutter app. You can also clone the repo right here.

Also published here.

Have any questions? Drop your comment here and I will respond to them as soon as possible.
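One optional hardening step worth mentioning: as written, sendChat indexes response['choices'] directly, so the error map that sendMessage returns on a failed request would throw before the loader is removed. A more defensive variant of the method could look like this sketch (the fallback copy is just an example):

```dart
Future<void> sendChat(String txt) async {
  addUserMessage(txt);

  final response = await OpenAiRepository.sendMessage(prompt: txt);

  // Fall back to the repository's error message (or a generic one)
  // when the response does not contain a completion.
  final String text = response.containsKey('choices')
      ? response['choices'][0]['text']
      : (response['message'] ?? "Oops, something went wrong. Please try again.");

  // Remove the loader before appending the AI reply.
  messages.removeLast();
  messages.add(AiMessage(text: text));
  notifyListeners();
}
```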