Do you want to use a local LLM inside Obsidian, similar to ChatGPT, completely free? If so, this guide is for you! I'll walk you through the exact steps to install and use the DeepSeek-R1 model in Obsidian so that you can have an AI-powered second brain right inside your notes.

## What is DeepSeek LLM?

DeepSeek is an open large language model (LLM) that you can run locally, directly on your own device. Unlike online AI tools, a local DeepSeek setup keeps your data private, works offline, and gives you full control over your workflow.

DeepSeek-R1 has gained significant attention for its advanced reasoning capabilities and efficiency. Here's why it stands out:

- **Advanced reasoning:** DeepSeek-R1 excels at complex problem-solving and mathematical reasoning, making it ideal for research and technical tasks.
- **Efficient resource usage:** Despite having approximately 670 billion parameters, it was trained using only about 2,000 NVIDIA H800 chips, significantly reducing costs.
- **Faster training:** The model was reportedly trained in just 55 days at a cost of about $5.6 million, far less than other large-scale models.
- **Open and customizable:** DeepSeek-R1's open-weight release allows users to fine-tune and adapt it for specific needs.
- **Runs on local devices:** Distilled versions of the model (such as the 8B variant used in this guide) are small enough to run on personal computers, unlike many cloud-dependent AI models.

## What is Obsidian?

In case you don't know what Obsidian is: Obsidian is a powerful note-taking and knowledge management application that helps users organize information using a linked, graph-based structure. It is popular for its Markdown-based notes, customization through community plugins, and support for local-first, privacy-focused workflows.

Many users refer to Obsidian as a "second brain" because it lets them build a network of interconnected ideas, making it an excellent tool for researchers, writers, and productivity enthusiasts.

## How to Install DeepSeek LLM in Obsidian

### Step 1: Install Obsidian

If you haven't already, download and install Obsidian from https://obsidian.md.

### Step 2: Install Ollama (Required for Running DeepSeek)

DeepSeek LLM requires a tool called Ollama to run locally. Here's how to install it:

1. Go to https://ollama.ai, and download Ollama.
2. Select your operating system (Windows, Mac, or Linux). On Linux, you can install it with:

   ```
   curl -fsSL https://ollama.com/install.sh | sh
   ```

3. Follow the installation instructions.
4. Once installed, verify the installation by opening a terminal and running:

   ```
   ollama --version
   ```

If you see a version number, Ollama is installed correctly.

### Step 3: Download and Set Up DeepSeek LLM

Now, install the DeepSeek-R1 model by running this command in your terminal:

```
ollama run deepseek-r1:8b
```

Other DeepSeek model variants are listed in the Ollama model library. If you want to confirm everything is wired up before connecting Obsidian, see the optional check below.
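Optional: before touching Obsidian, you can confirm that the Ollama server is running and that the DeepSeek model was actually pulled. This is a minimal sketch using Ollama's standard CLI and its local HTTP API; the exact output will differ on your machine.

```sh
# List the models Ollama has downloaded; deepseek-r1:8b should appear here.
ollama list

# Ollama serves a local HTTP API on port 11434 by default.
# This should return a JSON list of installed models if the server is running.
curl http://localhost:11434/api/tags

# Quick one-off prompt to confirm the model responds (runs entirely offline).
ollama run deepseek-r1:8b "Reply with the single word: ready"
```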
### Step 4: Install the Obsidian AI Plugin

To connect DeepSeek with Obsidian, install an AI plugin:

1. Open Obsidian, and go to **Settings > Community Plugins**.
2. Click **Browse**, and search for **Smart Second Brain** or **Obsidian AI**.
3. Click **Install**, then enable the plugin.

### Step 5: Configure the Plugin

Once installed, configure the plugin:

1. Open **Settings**, and find the plugin's own settings tab.
2. Set the API URL to `http://localhost:11434` (this is Ollama's default address).
3. In the **Chat Model** section, select the DeepSeek model you downloaded (for example, `deepseek-r1:8b`).
4. Save your settings, and restart Obsidian.

If the plugin reports that it can't reach Ollama, the quick connectivity check below can help you narrow down the problem.
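The sketch below tests, from a terminal, the same local endpoint the plugin talks to. It assumes the default Ollama port (11434) and the `deepseek-r1:8b` tag pulled earlier; `/api/generate` is Ollama's standard generation endpoint. Some Obsidian plugins also need Ollama to accept requests from Obsidian's origin, which is controlled by the `OLLAMA_ORIGINS` environment variable; check your plugin's documentation for the exact value it expects.

```sh
# Send one prompt to the same local endpoint the plugin uses.
# "stream": false returns a single JSON response instead of a token stream.
curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:8b",
  "prompt": "Say hello in five words.",
  "stream": false
}'

# If Ollama rejects requests coming from Obsidian itself (a CORS error),
# restarting Ollama with an expanded allowed-origins list is a common fix.
# The exact origin value depends on your plugin and platform.
# OLLAMA_ORIGINS="app://obsidian.md*" ollama serve
```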
## How to Access DeepSeek Inside Obsidian

Now you're ready to use AI inside Obsidian!

1. Click the AI chat icon in the left sidebar.
2. Type your query in the chat box, and hit Enter.
3. DeepSeek will generate a response directly inside Obsidian.

Here's what you can do (a small command-line example of the summarization use case follows this list):

- **Generate content:** Ask DeepSeek to create text, ideas, or summaries.
- **Summarize notes:** Select text, and let the AI generate a quick summary.
- **Ask questions:** Get answers on programming, math, grammar, and more.
- **Translate text:** Use AI for multilingual support.
- **Extract insights:** Pull key points from large documents.
- **Brainstorm ideas:** Get creative suggestions for projects or writing.
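As a quick illustration of the summarization use case, here is a minimal sketch that feeds a note to the same local model from the command line, outside Obsidian. The file name `meeting-notes.md` is only a placeholder for one of your own notes.

```sh
# Summarize a Markdown note with the locally installed DeepSeek model.
# Replace meeting-notes.md with any note from your vault.
ollama run deepseek-r1:8b "Summarize the following note in three bullet points: $(cat meeting-notes.md)"
```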
## Video Tutorial

https://youtu.be/qAsGO5N7OCk

## Conclusion

Using DeepSeek-R1 with Obsidian gives you a powerful, private, and offline AI assistant. Give it a try, and let me know your thoughts in the comments below if you found this guide helpful.

Cheers!