blog: add DeepSeek R1 local installation guide #4552
base: dev
Conversation
chore: sync 0.5.14 release into main
- Add comprehensive guide for running DeepSeek R1 locally
- Include step-by-step instructions with screenshots
- Add VRAM requirements and model selection guide
- Include system prompt setup instructions
Preview URL: https://6d249026.docs-9ba.pages.dev
@@ -0,0 +1,109 @@
---
title: "Beginner's Guide: Run DeepSeek R1 Locally (Private)"
I would add (and Privately)
![image](./_assets/run-deepseek-r1-locally-in-jan.jpg)
You can run DeepSeek R1 on your own computer! While the full model needs very powerful hardware, we'll use a smaller version that works great on regular computers.
The flow of this first sentence is a bit off. I would recommend adding something along the lines of: "R1 is one of the best open source models in the market right now, and the best part is that we can run different versions of it on our laptop."
Keep reading for a step-by-step guide with pictures.

## Step 1: Download Jan

[Jan](https://jan.ai/) is a free app that helps you run AI models on your computer. It works on Windows, Mac, and Linux, and it's super easy to use - no coding needed!
I would recommend using words like "straightforward" instead of easy, super easy, or simple.
<Callout type="info">
💡 Not sure how much VRAM your computer has?
- Windows: Press Windows + R, type "dxdiag", press Enter, and click the "Display" tab
- Mac: Click Apple menu > About This Mac > More Info > Graphics/Displays
What about Linux? 🥲
</Callout>
Below is a detailed table showing which version you can run based on your computer's VRAM:
I would recommend adding a bit of wording on what is a distilled model versus the full one and why these say qwen vs llama. A lot of people won't know and are assuming they are downloading the original one.
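To make a VRAM table like the one discussed above concrete, here is a rough back-of-the-envelope size estimate. The ~4.5 bits-per-weight figure for Q4-style quantization is an approximation, not an exact spec; the model names match DeepSeek's published R1 distills.

```python
def approx_file_size_gb(params_billions: float, bits_per_weight: float = 4.5) -> float:
    """Rough on-disk/in-memory size of a quantized model's weights.

    ~4.5 bits per weight approximates Q4_K_M-style quantization;
    actual GGUF files vary a little by architecture."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

# DeepSeek's distilled R1 variants and their parameter counts:
distills = [("R1-Distill-Qwen-1.5B", 1.5), ("R1-Distill-Qwen-7B", 7),
            ("R1-Distill-Llama-8B", 8), ("R1-Distill-Qwen-14B", 14),
            ("R1-Distill-Qwen-32B", 32), ("R1-Distill-Llama-70B", 70)]
for name, b in distills:
    print(f"{name}: ~{approx_file_size_gb(b):.1f} GB at Q4")
```

A useful rule of thumb that falls out of this: the Q4 weights alone need roughly half the parameter count in GB, and you want some VRAM headroom on top for the KV cache.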
@@ -0,0 +1,188 @@
---
title: "How to run AI models locally: A Complete Guide for Beginners"
Maybe "... : A Beginners Guide"? 🤔
This is the build for this pull request. You can download it from the Artifacts section here: Build URL.
Great work! 🙌
These are suggestions on readability, wording, and context. I hope these are useful.
# How to run AI models locally: A Complete Guide for Beginners
Running AI models locally means installing them on your computer instead of using cloud services. This guide shows you how to run open-source AI models like Llama, Mistral, or DeepSeek on your computer - even if you're not technical.
You don't install them like an app, you download models to be consumed by a program. Maybe rewording this a bit here would be helpful.
I think it would sound much better, "regardless of your background" rather than "even if you're not technical."
## Understanding Local AI models

Think of AI models like apps - some are small and fast, others are bigger but smarter. Let's understand two important terms you'll see often: parameters and quantization.
I wouldn't compare them to apps since a lot of people don't know the difference between a GPT-4o and ChatGPT. The app and the model are already the same for them. Maybe the model is the engine and the app is the car chassis?
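The quantization trade-off the guide introduces can be shown numerically. The bits-per-weight figures below are approximate averages for common llama.cpp quant formats (exact sizes vary per model), used here only to illustrate why a quantized model fits where a full-precision one does not.

```python
# Approximate average bits per weight for common llama.cpp quant formats.
# Ballpark figures only; exact sizes vary by model architecture.
BITS_PER_WEIGHT = {"FP16": 16.0, "Q8_0": 8.5, "Q4_K_M": 4.5, "Q2_K": 2.6}

def weights_size_gb(params_billions: float, fmt: str) -> float:
    """Approximate memory needed for the weights alone,
    excluding KV cache and runtime overhead."""
    return params_billions * 1e9 * BITS_PER_WEIGHT[fmt] / 8 / 1e9

# The same 8B-parameter model at different quantization levels:
for fmt in BITS_PER_WEIGHT:
    print(f"{fmt:7s} ~{weights_size_gb(8, fmt):.1f} GB")
```

The jump from FP16 (~16 GB for 8B parameters) down to Q4_K_M (~4.5 GB) is what makes these models runnable on consumer GPUs, at a modest quality cost.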
### 2. Use Hugging Face:

<Callout type="warning">
Important: Only GGUF models will work with Jan. Make sure to use models that have "GGUF" in their name.
I think it would be useful to say a sentence or two on what is the GGUF format in the previous section on quantization.
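The "GGUF only" rule can also be sanity-checked programmatically: GGUF files begin with the 4-byte magic `b"GGUF"`. A short sketch (the throwaway file below is just for demonstration; a real check would point at a downloaded model):

```python
import os
import tempfile

def looks_like_gguf(path: str) -> bool:
    """GGUF model files start with the 4-byte magic b"GGUF"."""
    with open(path, "rb") as f:
        return f.read(4) == b"GGUF"

# Demo with a throwaway file carrying the GGUF magic:
fd, demo = tempfile.mkstemp()
os.write(fd, b"GGUF")
os.close(fd)
print(looks_like_gguf(demo))  # True
os.remove(demo)
```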