Running DeepSeek on Your Local Machine: Complete Setup Tutorial
What Does Running an LLM Locally Mean?

Running an LLM (Large Language Model) locally means setting up the model to run directly on your computer, so that once the model has been downloaded, no internet connection is needed. This way, you can interact with the model and process data entirely offline, which is perfect for situations where internet access is limited or unavailable.

For example, imagine you’re in a remote area, or you’re simply avoiding online usage due to security or privacy concerns. With DeepSeek, you can still use the model and get all its functionalities without relying on a cloud server.

Device Compatibility

In this tutorial, we will run the DeepSeek 1.5B model, which is a lightweight version designed to run efficiently on a standard CPU. This model requires a minimum of 4GB RAM, making it accessible even on basic computers without high-end hardware.

Understanding Model Sizes (B in Model Names)

The “B” in 1.5B refers to the number of parameters, in billions, that the model contains. Parameters are the learned weights of an AI model, and their count largely determines its ability to understand and generate text.

Here’s a quick comparison:

  • DeepSeek 1.5B → 1.5 billion parameters (runs on a CPU with 4GB+ RAM)
  • DeepSeek 7B → 7 billion parameters (requires more RAM and a better CPU/GPU)
  • DeepSeek 14B → 14 billion parameters (needs a strong GPU and high RAM, preferably 16GB+)

For this tutorial, since we are focusing on DeepSeek 1.5B, you won’t need a powerful GPU — just a basic CPU with 4GB or more RAM will work fine!
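To see why a 1.5B model fits comfortably in 4GB of RAM, a rough back-of-the-envelope estimate helps. The 0.6 bytes per parameter used below is an assumption (typical of 4-bit quantized weights), chosen because it roughly matches the 1.1GB download size mentioned later in this tutorial:

```shell
# Rough memory estimate: 1.5 billion parameters at an assumed ~0.6 bytes each
# (a typical figure for 4-bit quantized weights, not an exact spec)
awk 'BEGIN { printf "%.1f GB\n", 1.5e9 * 0.6 / 1e9 }'
# prints "0.9 GB"
```

Add a bit of overhead for the runtime and context window, and 4GB of RAM is a comfortable minimum.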

Setting Up DeepSeek Locally

Step 1: Download Ollama

To run DeepSeek locally, you first need to download Ollama from its official website (ollama.com). Ollama is a tool that lets you pull and run AI models directly on your computer.

Step 2: Install Ollama

Once the download is complete, follow these steps to install Ollama:

  • Open the downloaded setup file.
  • Follow the installation instructions on your screen.

After installation, Ollama will be ready to use, and we can move on to setting up DeepSeek! 🚀
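To confirm the installation worked, you can check the CLI from a terminal:

```shell
# Verify that Ollama is installed and on your PATH; prints the installed version
ollama --version
```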

Step 3: Pull and Run DeepSeek 1.5B

Now that Ollama is installed, it’s time to download and run the DeepSeek 1.5B model.

  • Open your terminal (Command Prompt, PowerShell, or Terminal on Mac/Linux).
  • Run the following command:

ollama run deepseek-r1:1.5b

  • This will start pulling the DeepSeek 1.5B model. Since the model size is 1.1GB, the download time will depend on your internet speed.
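Once the pull finishes, you can confirm the model is available on your machine:

```shell
# List all models downloaded locally; deepseek-r1:1.5b should appear in the output
ollama list
```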

If Running Other DeepSeek Models

If you want to run a different DeepSeek model (e.g., 7B or 14B), visit the DeepSeek-R1 page in the Ollama model library (ollama.com/library/deepseek-r1).

  • Choose the model you want.
  • You will find a command next to it — run that in your terminal to pull and execute the selected model.
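For reference, the larger variants follow the same tag pattern (the tags below assume the naming used in the Ollama model library; double-check the model page before pulling, as these downloads are several gigabytes):

```shell
# Larger DeepSeek-R1 variants -- each needs correspondingly more RAM/VRAM
ollama run deepseek-r1:7b
ollama run deepseek-r1:14b
```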

Interacting with the Model

Once the download is complete:

  • You will see an interactive prompt (>>>) in the terminal.
  • Type your question, press Enter, and the model will generate a response.
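Beyond the interactive prompt, Ollama also serves a local HTTP API (on port 11434 by default) while it is running, which is handy for scripting. A minimal sketch using curl, assuming the Ollama service is running and the 1.5B model has already been pulled:

```shell
# Send a single prompt to the locally running model and receive
# the full reply as one JSON object ("stream": false disables token streaming).
curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:1.5b",
  "prompt": "Explain what a parameter is in one sentence.",
  "stream": false
}'
```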

Exiting the Model

To exit the running model, press Ctrl + D (or type /bye and press Enter).

Conclusion

Running DeepSeek 1.5B locally with Ollama is a simple and efficient way to use an AI model without relying on an internet connection after the initial download. This setup is great for privacy, reliability, and offline accessibility.

Key Takeaways:
✔ Lightweight Model — DeepSeek 1.5B runs on a CPU with 4GB+ RAM, making it accessible to most users.
✔ Offline Functionality — No internet is required once the model is downloaded.
✔ Scalability — You can run larger models like 7B or 14B if you have the necessary hardware.
✔ Easy Setup — Just install Ollama, pull the model, and start chatting!

Now that you have DeepSeek running locally, you can experiment with different prompts and explore its capabilities. 🚀