How to Use Ollama: Hands-On With Local LLMs and Building a Chatbot

by Arjun (@arjunrao1987) · 7 min read · March 14th, 2024

Too Long; Didn't Read

In the space of local LLMs, I first ran into LMStudio. While that app is easy to use, I preferred the simplicity and flexibility that Ollama provides. To learn more, visit the Ollama website. In short: Ollama hosts its own curated list of models that you have access to. You can download these models to your local machine and then interact with them through a command-line prompt. Alternatively, when you run a model, Ollama also starts an inference server on port 11434 (by default) that you can interact with via its API and through libraries like Langchain.
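To make that concrete, here is a minimal sketch of calling that local inference server from Python. It assumes you have already pulled and started a model (the model name "llama2" below is just an illustration; use whichever model you pulled) and that the server is listening on the default port 11434:

```python
# Minimal sketch: query Ollama's local inference server over HTTP.
# Assumes a model (here "llama2", as an example) has been pulled
# and the server is running on the default port 11434.
import json
import urllib.request

payload = {
    "model": "llama2",               # whichever model you pulled locally
    "prompt": "Why is the sky blue?",
    "stream": False,                 # return one JSON object instead of a stream
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    body = json.load(resp)

print(body["response"])              # the model's generated text
```

This same endpoint is what higher-level libraries such as Langchain wrap, so once a raw request like this works, swapping in a library client is straightforward.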