The past weekend was a whirlwind of coffee, code, and countless “Why isn’t this working?” moments. My goal? Build a functional chatbot from scratch in just 48 hours. This wasn’t for a hackathon or a client—it was a personal challenge to push my limits and dive into conversational AI, something I’d been fascinated by for months. Two days later, I had a working chatbot, a wealth of new knowledge, and a humbling appreciation for the complexity of AI. Here’s the full story, complete with code, tech stack choices, and the behind-the-scenes struggles that made it happen.

## 1. The Planning Phase Is as Important as Coding

I made the rookie mistake of jumping straight into coding without a clear plan. My initial excitement had me typing out Flask routes and API calls within the first hour, but by hour four I was rewriting half my logic because I hadn’t defined the chatbot’s purpose or user flow.

**What I Did Wrong:** I assumed I could “figure it out as I go.” Without a conversation map, my bot was a mess—greeting users inconsistently and failing to handle unexpected inputs.

**The Fix:** I paused to sketch a basic conversation flow:

- Greet the user warmly.
- Respond to simple questions (e.g., “What’s your name?”, “What can you do?”).
- Handle “I don’t understand” with a fallback response.
- Maintain context for the last two messages.

Spending just 2 hours on this blueprint saved me 6 hours of confusion later. Think of it like building a house—you wouldn’t start without a floor plan.

## 2. Choosing the Right Tech Stack

With only 48 hours, I needed tools that were fast to set up, reliable, and beginner-friendly for conversational AI.
After some research, I settled on:

- **Backend:** Python with Flask—lightweight, easy to deploy, and perfect for rapid prototyping.
- **NLP:** OpenAI’s GPT-3.5 API—its pre-trained model meant I didn’t have to build natural language processing from scratch.
- **Frontend:** A simple HTML/CSS/JavaScript chat window—minimal but functional for user interaction.
- **Hosting:** Vercel—for instant deploys and zero server management hassle.

**Why This Stack?**

- **Flask:** I chose Flask over Django for its simplicity. I needed a backend that could handle API requests without boilerplate overhead.
- **OpenAI GPT-3.5:** Training my own NLP model was out of the question for a 48-hour project. GPT-3.5 offered robust language understanding with minimal setup.
- **HTML/CSS/JS:** A basic chat interface was enough to test functionality. I avoided heavy frameworks like React to save time.
- **Vercel:** Its seamless deployment from a GitHub repo let me focus on coding rather than server configuration.
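One thing these bullets gloss over is the glue between the pieces: the JS frontend and the Flask backend only ever share a single JSON shape. Here’s a stdlib-only sketch of that contract (my own illustration, not code from the project—`handle_chat` is a hypothetical name):

```python
import json

def handle_chat(raw_body: str) -> dict:
    """Mimic the /chat contract the stack shares:
    {"message": ...} in, {"response": ...} or {"error": ...} out."""
    body = json.loads(raw_body)
    message = body.get("message")
    if not message:
        return {"error": "No message provided"}
    # The real backend swaps this echo for a GPT-3.5 call.
    return {"response": f"You said: {message}"}
```

Pinning down this shape first meant the frontend and backend could be built (and swapped out) independently.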
The temptation to experiment with “cooler” tools like FastAPI or spaCy was real, but deadlines demanded discipline.

## 3. Building a Minimum Viable Chatbot (MVC)

I wanted the bot to do everything—sentiment analysis, multi-language support, voice commands—but time constraints forced me to focus on a Minimum Viable Chatbot:

- Greet the user.
- Answer basic questions.
- Handle “I don’t understand” gracefully.
- Remember the last two messages for context.

Here’s how I built it, step by step.

### Step 1: Setting Up the Flask Backend

I started by creating a simple Flask app to handle incoming messages and route them to the OpenAI API. Below is the core backend code.

```python
from flask import Flask, request, jsonify
import openai
import os

app = Flask(__name__)

# Set up OpenAI API key
openai.api_key = os.getenv("OPENAI_API_KEY")

# Store conversation history (last 2 exchanges for context)
conversation_history = []

@app.route('/chat', methods=['POST'])
def chat():
    user_message = request.json.get('message')
    if not user_message:
        return jsonify({'error': 'No message provided'}), 400

    # Append user message to history
    conversation_history.append({'role': 'user', 'content': user_message})

    # Keep only the last 4 messages (2 user messages + 2 bot responses)
    if len(conversation_history) > 4:
        conversation_history[:] = conversation_history[-4:]

    try:
        # Call OpenAI API with conversation history
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system", "content": "You are a friendly chatbot. "
                 "Keep responses concise and conversational."},
                *conversation_history
            ]
        )
        bot_response = response.choices[0].message['content']
        conversation_history.append({'role': 'assistant', 'content': bot_response})
        return jsonify({'response': bot_response})
    except Exception as e:
        return jsonify({'error': str(e)}), 500

if __name__ == '__main__':
    app.run(debug=True)
```

**Key Function Explained:** The `/chat` endpoint receives a user’s message via a POST request, appends it to the conversation history, and sends it to OpenAI’s GPT-3.5 model with a system prompt to keep responses friendly and concise. The response is then stored in the history and sent back to the frontend.
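The history-trimming step is what keeps token usage bounded, so it’s worth pulling out on its own. Here’s the same slice logic in isolation (`cap_history` is my own name for it, not something from the project):

```python
MAX_MESSAGES = 4  # 2 user messages + 2 bot responses

def cap_history(history, limit=MAX_MESSAGES):
    """Trim a messages list in place to its most recent `limit` entries,
    matching the slice used in the /chat endpoint."""
    if len(history) > limit:
        history[:] = history[-limit:]
    return history
```

The in-place slice assignment (`history[:] = ...`) mutates the caller’s list, which is the same trick the endpoint uses to trim the module-level `conversation_history` without needing a `global` declaration.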
**Challenge:** I initially forgot to limit the conversation history, causing API token usage to skyrocket. I fixed this by capping the history at the last four messages (two user messages + two bot responses).

### Step 2: Building the Frontend

The frontend was a simple HTML page with a chat window. I used vanilla JavaScript to handle user input and display responses. Below is the core code.

```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>Simple Chatbot</title>
  <style>
    body { font-family: Arial, sans-serif; max-width: 600px; margin: 20px auto; }
    #chat-window { border: 1px solid #ccc; padding: 10px; height: 400px; overflow-y: scroll; }
    #user-input { width: 80%; padding: 10px; }
    button { padding: 10px; }
    .user-message { color: blue; }
    .bot-message { color: green; }
  </style>
</head>
<body>
  <div id="chat-window"></div>
  <input type="text" id="user-input" placeholder="Type your message...">
  <button onclick="sendMessage()">Send</button>
  <script>
    async function sendMessage() {
      const input = document.getElementById('user-input');
      const chatWindow = document.getElementById('chat-window');
      const message = input.value.trim();
      if (!message) return;

      // Display user message
      chatWindow.innerHTML += `<div class="user-message">You: ${message}</div>`;
      input.value = '';
      chatWindow.scrollTop = chatWindow.scrollHeight;

      // Send message to backend
      try {
        const response = await fetch('/chat', {
          method: 'POST',
          headers: { 'Content-Type': 'application/json' },
          body: JSON.stringify({ message })
        });
        const data = await response.json();
        if (data.response) {
          chatWindow.innerHTML += `<div class="bot-message">Bot: ${data.response}</div>`;
        } else {
          chatWindow.innerHTML += `<div class="bot-message">Bot: Sorry, something went wrong!</div>`;
        }
      } catch (error) {
        chatWindow.innerHTML += `<div class="bot-message">Bot: Error: ${error.message}</div>`;
      }
      chatWindow.scrollTop = chatWindow.scrollHeight;
    }

    // Allow sending message with Enter key
    document.getElementById('user-input').addEventListener('keypress', (e) => {
      if (e.key === 'Enter') sendMessage();
    });
  </script>
</body>
</html>
```

**Key Function Explained:** The `sendMessage` function captures the user’s input, displays it in the chat window, sends it to the Flask backend via a POST request, and appends the bot’s response. The chat window auto-scrolls to show the latest messages.

**Challenge:** My first version didn’t handle the Enter key for sending messages, which felt clunky. Adding the keypress event listener improved the user experience significantly.

### Step 3: Deploying with Vercel

To make the chatbot accessible online, I deployed it to Vercel. I structured my project as follows:

- `app.py`: The Flask backend.
- `static/index.html`: The frontend.
- `vercel.json`: Configuration for Vercel to route API requests to Flask and serve static files.

```json
{
  "version": 2,
  "builds": [
    { "src": "app.py", "use": "@vercel/python" },
    { "src": "static/**", "use": "@vercel/static" }
  ],
  "routes": [
    { "src": "/chat", "dest": "app.py" },
    { "src": "/(.*)", "dest": "static/$1" }
  ]
}
```

**Challenge:** Vercel’s serverless environment required a `requirements.txt` to install dependencies like `flask` and `openai`. I forgot to include `gunicorn`, which caused deployment failures. After adding it, the app deployed smoothly.

### Step 4: Integrating OpenAI’s API

To use GPT-3.5, I signed up for an OpenAI account, generated an API key, and stored it as an environment variable in Vercel. The API call in `app.py` sends the conversation history to ensure context-aware responses.

**Challenge:** I hit rate limits during testing because I was sending too many requests while debugging. I solved this by mocking API responses locally with sample data until the logic was stable.

## 4. Debugging

Debugging consumed nearly 50% of my 48 hours.
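The local mock from Step 4 was a big part of surviving this phase. Mine isn’t in the post, but a stand-in only needs to reproduce the response shape the backend reads (`response.choices[0].message['content']`); something along these hypothetical lines works:

```python
# Hypothetical stand-in for openai.ChatCompletion.create, mimicking
# only the attributes the /chat endpoint actually reads.
class _FakeChoice:
    def __init__(self, content):
        self.message = {"content": content}

class _FakeResponse:
    def __init__(self, content):
        self.choices = [_FakeChoice(content)]

def fake_chat_completion(model=None, messages=None, **kwargs):
    """Echo the latest user message back, skipping the network entirely."""
    last_user = next(m["content"] for m in reversed(messages)
                     if m["role"] == "user")
    return _FakeResponse(f"(mock) You said: {last_user}")

# While debugging locally:
# openai.ChatCompletion.create = fake_chat_completion
```

With the real call patched out, every endpoint bug below could be reproduced instantly and for free.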
Common issues included:

- **JSON Errors:** A missing comma in a request body cost me 20 minutes.
- **Server Restarts:** Forgetting to restart Flask after code changes wasted another 10 minutes.
- **CORS Issues:** The frontend couldn’t talk to the backend due to CORS. I added `flask-cors` to fix it.

**Solution:** I started logging all API requests and responses to a file (`debug.log`) and used `console.log` extensively in the frontend. This helped me catch errors faster.

## 5. User Testing Revealed My Blind Spots

When I let two friends test the chatbot, they broke it in ways I hadn’t anticipated:

- One typed “Yo bot, what’s up?” instead of “Hello,” and the bot responded awkwardly.
- Another asked a question with typos (“Whaat is yor name?”), and the bot failed to parse it.

**Solution:** I updated the system prompt in the OpenAI API call to handle casual language and typos better. I also added a fallback response: “Sorry, I’m not sure what you mean! Try rephrasing, and I’ll do my best.”

## 6. Deadlines Are the Best Teachers

By the end of the 48 hours, I had a chatbot that could hold a basic conversation. It wasn’t perfect—no sentiment analysis, no multi-language support—but it worked. The deadline forced me to prioritize functionality over perfectionism, teaching me more in two days than weeks of casual tinkering had.

## Final Thoughts

Building a chatbot in 48 hours was a crash course in focus, problem-solving, and resilience.
Speed forced me to make smart trade-offs and prioritize what mattered: a functional bot with a decent user experience.

**My Advice for Aspiring Chatbot Builders:**

- Start with a clear goal and conversation map.
- Keep the first version minimal but functional.
- Test with real users early to catch unexpected behaviors.

For more tech stories, tutorials, and AI experiments, check out my blog, Tech Gadget Orbit. It’s where I share everything I learn from my hands-on projects in AI, software development, and emerging tech.