
blog: add DeepSeek R1 local installation guide #4552

Open

wants to merge 3 commits into base branch `dev`
Conversation

eckartal
Contributor

Describe Your Changes

  • docs: add DeepSeek R1 local installation guide
  • Add a comprehensive guide for running DeepSeek R1 locally
  • Include step-by-step instructions with screenshots
  • Add VRAM requirements and model selection guide
  • Include system prompt setup instructions

Fixes Issues

  • Closes #
  • Closes #

Self Checklist

  • Added relevant comments, especially in complex areas
  • Updated docs (for bug fixes / features)
  • Created issues for follow-up changes or refactoring needed

louis-menlo and others added 2 commits January 23, 2025 11:42
chore: sync 0.5.14 release into main
- Add comprehensive guide for running DeepSeek R1 locally
- Include step-by-step instructions with screenshots
- Add VRAM requirements and model selection guide
- Include system prompt setup instructions

github-actions bot commented Jan 31, 2025

Preview URL: https://6d249026.docs-9ba.pages.dev

@@ -0,0 +1,109 @@
---
title: "Beginner's Guide: Run DeepSeek R1 Locally (Private)"


I would add (and Privately)


![image](./_assets/run-deepseek-r1-locally-in-jan.jpg)

You can run DeepSeek R1 on your own computer! While the full model needs very powerful hardware, we'll use a smaller version that works great on regular computers.


The flow of this first sentence is a bit off. I would recommend adding something along the lines of: "R1 is one of the best open source models in the market right now, and the best part is that we can run different versions of it on our laptop."

Keep reading for a step-by-step guide with pictures.

## Step 1: Download Jan
[Jan](https://jan.ai/) is a free app that helps you run AI models on your computer. It works on Windows, Mac, and Linux, and it's super easy to use - no coding needed!


I would recommend using words like "straightforward" instead of easy, super easy, or simple.

<Callout type="info">
💡 Not sure how much VRAM your computer has?
- Windows: Press Windows + R, type "dxdiag", press Enter, and click the "Display" tab
- Mac: Click Apple menu > About This Mac > More Info > Graphics/Displays


What about Linux? 🥲

</Callout>
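Since the callout above covers only Windows and Mac, a sketch for Linux may help. The following is a minimal, hedged example that assumes an NVIDIA GPU with `nvidia-smi` on the PATH; it returns `None` when that tool is unavailable (AMD/Intel users would check `/sys/class/drm` or use `lspci` instead):

```python
import shutil
import subprocess

def linux_vram_mib():
    """Return total VRAM in MiB via nvidia-smi, or None if unavailable.

    Assumes an NVIDIA GPU with the driver tools installed; this is an
    illustrative sketch, not the only way to read VRAM on Linux.
    """
    if shutil.which("nvidia-smi") is None:
        return None
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=memory.total",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    # nvidia-smi prints one number per GPU, e.g. "8192"; sum across GPUs.
    return sum(int(line) for line in out.splitlines() if line.strip())
```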

Below is a detailed table showing which version you can run based on your computer's VRAM:


I would recommend adding a bit of wording on what a distilled model is versus the full one, and why these say Qwen vs. Llama. A lot of people won't know and will assume they are downloading the original one.

@@ -0,0 +1,188 @@
---
title: "How to run AI models locally: A Complete Guide for Beginners"


Maybe "... : A Beginner's Guide"? 🤔


This is the build for this pull request. You can download it from the Artifacts section here: Build URL.



@ramonpzg left a comment


Great work! 🙌

These are suggestions on readability, wording, and context. I hope these are useful.


# How to run AI models locally: A Complete Guide for Beginners

Running AI models locally means installing them on your computer instead of using cloud services. This guide shows you how to run open-source AI models like Llama, Mistral, or DeepSeek on your computer - even if you're not technical.


You don't install them like an app; you download models to be consumed by a program. Maybe rewording this a bit would be helpful.

I think "regardless of your background" would sound much better than "even if you're not technical."


## Understanding Local AI models

Think of AI models like apps - some are small and fast, others are bigger but smarter. Let's understand two important terms you'll see often: parameters and quantization.
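To make parameters and quantization concrete, here is a hedged back-of-envelope sketch: file size is roughly parameter count times bits per weight, divided by 8. The function name and the flat overhead-free formula are illustrative assumptions, not part of the guide; real GGUF files run slightly larger because of metadata.

```python
def approx_size_gb(params_billions: float, bits_per_weight: int) -> float:
    """Rough model download size in GB: parameters x bits per weight / 8.

    Illustrative estimate only; ignores metadata and per-tensor
    overhead, so actual quantized files are a bit bigger.
    """
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return round(bytes_total / 1e9, 1)

# A 7B model at 4-bit quantization is roughly 3.5 GB,
# versus roughly 14 GB at 16-bit precision.
```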


I wouldn't compare them to apps, since a lot of people don't know the difference between GPT-4o and ChatGPT; the app and the model are already the same thing for them. Maybe the model is the engine and the app is the car chassis?

### 2. Use Hugging Face:

<Callout type="warning">
Important: Only GGUF models will work with Jan. Make sure to use models that have "GGUF" in their name.


I think it would be useful to add a sentence or two on what the GGUF format is in the previous section on quantization.

