Build Your Own AI Chatbot on Your Local PC — Online and Offline

Written by hacker95231466 | Published 2025/10/08
Tech Story Tags: python | artificial-intelligence | chatbots | data-science | openai | ollama | local-ai-chatbot | build-your-own-chatbot

TL;DR: You can easily build a personal AI chatbot that runs both online (using OpenAI GPT) and offline (using Ollama local models) right from your local machine, and integrate it into a new or existing application quickly.

Let’s go step-by-step.

1. Online Chatbot (Using OpenAI API)

This method uses OpenAI’s GPT models (like GPT-3.5 or GPT-4) and requires internet access.

Step 1: Install Python

Download Python 3.x from https://www.python.org/downloads/, run the installer, and check “Add Python to PATH” during setup. Verify the install by running python --version in CMD.

Step 2: Install Required Packages

Run the following in CMD:

pip install openai colorama pyttsx3

Package Summary

| Package  | Purpose                   |
|----------|---------------------------|
| openai   | Connects to GPT models    |
| colorama | Adds colored text in CMD  |
| pyttsx3  | Enables voice output      |
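A quick way to confirm all three packages installed correctly is a short import check, for example:

```python
import importlib

# Try importing each required package and report its status.
for pkg in ("openai", "colorama", "pyttsx3"):
    try:
        importlib.import_module(pkg)
        print(pkg, "OK")
    except ImportError:
        print(pkg, "MISSING - install it with: pip install " + pkg)
```

If any package shows MISSING, re-run the pip command from the previous step before continuing.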

Step 3: Generate OpenAI API Key

  1. Log in at: https://platform.openai.com/api-keys
  2. Click “Create new secret key”
  3. Copy your key (starting with ‘sk-…’)

Step 4: Set the API Key in Windows

In CMD, run the following command:

setx OPENAI_API_KEY "sk-your_api_key_here"

Close and reopen your command prompt.
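Since setx only applies to newly opened command prompts, a short check like the following confirms the key is actually visible to Python before you run the chatbot:

```python
import os

# setx writes to the registry but does NOT update the current CMD session,
# so run this from a freshly opened command prompt.
key = os.getenv("OPENAI_API_KEY")
status = "found" if key else "missing"
print("OPENAI_API_KEY is", status)
```

If it prints "missing", close and reopen CMD, or re-run the setx command.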

Step 5: Create Your Chatbot Script

Create a file named chatbot_ai.py and paste the following code:

import os
from colorama import Fore, Style, init
from openai import OpenAI
import pyttsx3

# Initialize color and voice
init(autoreset=True)
engine = pyttsx3.init()

# Define model
OPENAI_MODEL = "gpt-3.5-turbo"

# Connect OpenAI API
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

print(Fore.CYAN + "\n🤖 AI Chatbot ready! Type 'exit' to quit.\n")
chat_history = []

def speak(text):
    engine.say(text)
    engine.runAndWait()

while True:
    user_input = input(Fore.GREEN + "You: " + Style.RESET_ALL)
    if user_input.lower() in ["exit", "quit", "bye"]:
        print(Fore.MAGENTA + "Chatbot: Goodbye! 👋")
        break

    chat_history.append({"role": "user", "content": user_input})

    try:
        response = client.chat.completions.create(
            model=OPENAI_MODEL,
            messages=chat_history
        )
        reply = response.choices[0].message.content.strip()
    except Exception as e:
        reply = f"⚠️ Error: {e}"

    print(Fore.YELLOW + "Chatbot:" + Style.RESET_ALL, reply)
    chat_history.append({"role": "assistant", "content": reply})
    speak(reply)

Step 6: Run the Chatbot

Navigate to the folder where you saved the script:

cd C:\path\to\your\script

python chatbot_ai.py

You’ll now get responses from GPT directly in your terminal.
Make sure your OpenAI quota is active and not exhausted.
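Because the script re-sends the full chat_history list on every request, the model keeps conversational context. As a hypothetical tweak, you can also seed the history with a "system" message (a role the Chat Completions API supports alongside "user" and "assistant") to give the bot a persona:

```python
# Seed the history with a system message to set the bot's behavior.
chat_history = [
    {"role": "system", "content": "You are a concise, friendly assistant."}
]

# Each turn appends to the list, so the whole conversation is re-sent
# and the model can answer follow-up questions in context:
chat_history.append({"role": "user", "content": "Remember my name is Sam."})
chat_history.append({"role": "assistant", "content": "Got it, Sam!"})
chat_history.append({"role": "user", "content": "What's my name?"})

print(len(chat_history))  # 4 messages would accompany the next request
```

Keep in mind that a growing history means more tokens per request, which raises cost and can eventually hit the model's context limit.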

2. Offline Chatbot (Using Ollama Local Model)

Do you have a use case for running the chatbot without internet access or API keys? You can do it with Ollama, a tool that runs large language models locally (such as Llama 3, Mistral, and Gemma). Let’s walk through it step by step.

Step 1: Install Ollama

Download and install from https://ollama.com/download

After installation, Ollama runs automatically in the background.

Verify installation:

ollama --version

Step 2: Download a Local Model

Pull the Llama 3 model (recommended):

ollama pull llama3

You can also try others:

ollama pull mistral
ollama pull gemma2

Step 3: Update Your Script for Offline Mode

Modify the same chatbot_ai.py file. Add a mode flag and the Ollama model name near the top (alongside OPENAI_MODEL), put the new import with the others, and branch on the flag inside the chat loop:

import subprocess  # add with the other imports at the top

USE_OPENAI = False       # False = offline mode via Ollama
OLLAMA_MODEL = "llama3"

Then, inside the while loop, replace the OpenAI call with:

    if USE_OPENAI:
        try:
            response = client.chat.completions.create(
                model=OPENAI_MODEL,
                messages=chat_history
            )
            reply = response.choices[0].message.content.strip()
        except Exception as e:
            reply = f"⚠️ Error: {e}"
    else:
        try:
            result = subprocess.run(
                ["ollama", "run", OLLAMA_MODEL],
                input=user_input,
                text=True,
                capture_output=True
            )
            reply = result.stdout.strip() or result.stderr.strip()
        except Exception as e:
            reply = f"⚠️ Ollama error: {e}"
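The subprocess.run piping pattern used here (send text on stdin, capture stdout) can be tried even without Ollama installed by swapping in a stand-in command. The sketch below substitutes a tiny Python one-liner for "ollama run llama3" purely to demonstrate the mechanics:

```python
import subprocess
import sys

# Stand-in for ["ollama", "run", "llama3"]: a Python one-liner that
# echoes its stdin back in uppercase, so the piping pattern is testable
# on any machine.
result = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.stdin.read().upper())"],
    input="hello",
    text=True,
    capture_output=True,
)
print(result.stdout.strip())  # HELLO
```

With Ollama installed, the real command works the same way: your prompt goes in on stdin and the model's reply comes back on stdout.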

Step 4: Run Your Offline Chatbot

Change directory to your script location:

cd C:\path\to\your\script

Run it:

python chatbot_ai.py

You can now chat without internet — responses are generated locally from Llama 3 or your chosen model.
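As an alternative to launching a subprocess per message, Ollama also serves a local REST API while it is running (by default at http://localhost:11434, with a /api/generate endpoint). A minimal sketch, assuming the default port and a pulled llama3 model:

```python
import json
import urllib.request

def ask_ollama(prompt, model="llama3"):
    """Send one prompt to the local Ollama REST API and return the reply."""
    payload = json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# The request body looks like this (no network needed to inspect it):
sample = json.dumps({"model": "llama3", "prompt": "hi", "stream": False})
print(sample)
```

Calling ask_ollama("Say hello") only works while Ollama is running, but it avoids the overhead of reloading the model for every subprocess invocation.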


Now you have your own personal AI assistant — online or offline, right on your PC!

Quick Comparison

| Feature           | Online GPT (OpenAI) | Offline Ollama            |
|-------------------|---------------------|---------------------------|
| Internet required | Yes                 | No                        |
| API key needed    | Yes                 | No                        |
| Model             | GPT-3.5 / GPT-4     | Llama 3 / Mistral / Gemma |
| Speed             | Fast (cloud)        | Depends on your PC        |
| Cost              | Pay-per-use         | Free (after install)      |

Summary

  • Online Mode (OpenAI) → Uses GPT-3.5 or GPT-4 with internet
  • Offline Mode (Ollama) → Uses local models like Llama 3 without internet
  • Both can run inside your local command prompt using Python


Written by hacker95231466 | Healthcare Architect. Develops applications in C#/.NET, Java, Python, TypeScript, and SQL on both cloud-native and on-prem servers.
Published by HackerNoon on 2025/10/08