
How to Use an Uncensored AI Model and Train It With Your Data

by Jeferson Borba, December 25th, 2023

Too Long; Didn't Read

Mistral is a French startup founded by former Meta and DeepMind researchers. Its model, released under the Apache 2.0 license, is claimed to be more powerful than LLaMA 2 and ChatGPT 3.5 while being completely open-source. We are going to learn how to use it uncensored and how to train it with our own data.


The days when ChatGPT was the singular solution in the AI industry are long past. New players like LLaMA and Gemini, developed by Meta and Google respectively, have entered the field. Despite their different tools and implementations, they share a commonality: they are closed-source (with some exceptions for LLaMA) and controlled by big tech companies.


This article explores a new contender in the AI industry, boasting an open-source tool that outperforms ChatGPT 3.5 and can be run locally. We will also learn how to use it uncensored and how to train it with our own data.

Introducing Mixtral 8x7B

Mistral is a French startup, established by former Meta and DeepMind researchers. Leveraging their extensive knowledge and experience, they successfully raised US$ 415 million in investments, bringing Mistral's valuation to US$ 2 billion.

Mixtral 8x7B magnet link, posted on Dec 8

The team at Mistral began gaining traction when they dropped a torrent link on X to their new model, Mixtral 8x7B. Released under the Apache 2.0 license, the model is completely open-source, and it is claimed to be more powerful than LLaMA 2 and ChatGPT 3.5.

Mixtral's Power and Capabilities

  • Handles a context of 32k tokens.
  • Functions in English, German, Spanish, Italian, and French.
  • Exhibits excellent performance when generating code.
  • Can be transformed into an instruction-following model.
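On that last point: Mistral's instruction-tuned variants expect prompts wrapped in the [INST] template. A minimal sketch (the helper name and example prompt are illustrative, not part of any official API):

```python
def format_instruction(instruction: str) -> str:
    """Wrap a user instruction in Mistral's [INST] ... [/INST] template."""
    return f"<s>[INST] {instruction} [/INST]"

# The model continues generating after the closing [/INST] tag.
prompt = format_instruction("Write a haiku about open-source AI.")
print(prompt)
```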


In tests, Mistral demonstrated remarkable power, surpassing LLaMA 2 70B in the majority of benchmarks and also either matching or outperforming ChatGPT 3.5 in other benchmarks.

Comparison between Mistral, LLaMA, and GPT (from https://mistral.ai/news/mixtral-of-experts)

Running Mistral Locally

Moving beyond the figures and tables, let's get practical. First, we'll need a tool to help us run the model locally: Ollama. macOS users can download the installer here. For Linux or WSL users, paste the following command into your terminal:

curl https://ollama.ai/install.sh | sh


We can then run LLMs locally, but we're not simply aiming for an AI to answer random questions - that's what ChatGPT is for. We're aiming for an uncensored AI that we can tweak and fine-tune according to our preferences.


Considering this, we will use dolphin-mixtral, an uncensored fine-tune of Mixtral that lifts the model's built-in constraints. To learn more about how dolphin-mixtral removed these constraints, check out this article from its creator.


Run the following command in your terminal to start running Ollama on your computer:

ollama serve


Then, in another terminal, run:

ollama run dolphin-mixtral:latest
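Besides the interactive prompt, `ollama serve` also exposes a local REST API on port 11434. A hedged Python sketch of a non-streaming request (the endpoint and field names follow Ollama's documented /api/generate schema; the prompt is illustrative):

```python
import json
import urllib.request

def build_generate_request(model: str, prompt: str) -> dict:
    # Non-streaming request body for Ollama's /api/generate endpoint.
    return {"model": model, "prompt": prompt, "stream": False}

payload = build_generate_request(
    "dolphin-mixtral:latest",
    "Explain mixture-of-experts in one sentence.",
)

# Sending requires a running `ollama serve`; guard the call so the
# sketch degrades gracefully when no server is listening.
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
try:
    with urllib.request.urlopen(req, timeout=5) as resp:
        print(json.loads(resp.read())["response"])
except OSError:
    print("Ollama is not running; start it with `ollama serve`.")
```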


The initial download may be time-consuming as it requires fetching about 26GB. Once the download is complete, dolphin-mixtral will await your input.

Prompt of dolphin-mixtral

Remember, running dolphin-mixtral requires substantial system resources, particularly RAM.

Usage of resources by dolphin-mixtral
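The 26GB footprint tracks the model's size: Mixtral 8x7B has roughly 47B total parameters (the experts share attention layers, so it is less than 8 x 7B), and Ollama ships a quantized build. A back-of-envelope sketch, where the parameter count, quantization level, and overhead factor are all rough assumptions:

```python
params = 46.7e9          # approximate total parameters in Mixtral 8x7B
bits_per_weight = 4      # assumed quantization level of the default build
overhead = 1.1           # rough factor for KV cache and runtime buffers

# Bytes per weight = bits / 8; convert to GiB with 2**30.
gib = params * bits_per_weight / 8 * overhead / 2**30
print(f"~{gib:.0f} GiB of RAM needed")
```

This lands in the same ballpark as the observed 26GB download, which is why a machine with 32GB of RAM is a comfortable minimum.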

Training Your Own Model

Now, you may be wondering whether you can train Mixtral with your own data. The answer is a resounding yes.


Start by creating an account on Hugging Face (if you haven't already), and then create a new space.

Space creation on Hugging Face

Choose Docker as the Space SDK, then select the AutoTrain template

Selecting the Space SDK

From here, you can select your model, upload your data, and start training. Training a model on a home computer can be challenging due to hardware demands.


Services like Hugging Face offer computing power (for a fee), but you can also consider Amazon Bedrock or Google Vertex AI to expedite the process.
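Before uploading, your data needs to be in a format AutoTrain accepts. For LLM fine-tuning this is commonly a CSV with a single text column holding each fully formatted example; a hedged sketch (the `text` column name and the [INST] formatting follow common AutoTrain and Mistral conventions, but check the AutoTrain docs for your task, and the example pairs stand in for your own data):

```python
import csv

# Hypothetical instruction/response pairs standing in for your own data.
pairs = [
    ("What does Mixtral 8x7B's license allow?",
     "Apache 2.0 permits commercial use, modification, and redistribution."),
    ("How large is the dolphin-mixtral download?",
     "Roughly 26GB for the default quantized build."),
]

with open("train.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["text"])  # single-column format often used by AutoTrain
    for instruction, response in pairs:
        writer.writerow([f"<s>[INST] {instruction} [/INST] {response}</s>"])
```

Once the CSV is uploaded, AutoTrain handles tokenization and the training loop itself; your main remaining choices are the base model and the hardware tier.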