How to Use Ollama: Hands-On With Local LLMs and Building a Chatbot

by Arjun, March 14th, 2024
Too Long; Didn't Read

In the space of local LLMs, I first ran into LMStudio. While the app itself is easy to use, I liked the simplicity and maneuverability that Ollama provides. To learn more about Ollama, you can go here. tl;dr: Ollama hosts its own curated list of models that you have access to. You can download these models to your local machine and then interact with them through a command-line prompt. Alternatively, when you run a model, Ollama also runs an inference server hosted at port 11434 (by default) that you can interact with via its REST API or through libraries like LangChain.
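To make that concrete, here's a minimal sketch of calling that local inference server from Python. It assumes Ollama is running on the default port 11434 and that you've already pulled a model; the `llama2` model name and the `ask_ollama` helper are just illustrative choices, not anything mandated by Ollama.

```python
import requests

# Ollama's local inference server listens on port 11434 by default.
OLLAMA_URL = "http://localhost:11434/api/generate"


def ask_ollama(prompt: str, model: str = "llama2") -> str:
    """Send a single prompt to the local Ollama server and return its reply."""
    response = requests.post(
        OLLAMA_URL,
        json={
            "model": model,   # assumes this model was pulled, e.g. `ollama pull llama2`
            "prompt": prompt,
            "stream": False,  # ask for one complete JSON object instead of a token stream
        },
        timeout=120,
    )
    response.raise_for_status()
    # The non-streaming response carries the generated text in the "response" field.
    return response.json()["response"]


if __name__ == "__main__":
    print(ask_ollama("Why is the sky blue?"))
```

With streaming left on (the default), the server instead returns a sequence of newline-delimited JSON chunks as tokens are generated, which is the mode you'd want for a responsive chatbot UI.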