Note
We are adding more Quickstart guides.
This section includes efficient guides to show you how to:
- bigdl-llm Migration Guide
- Install IPEX-LLM on Linux with Intel GPU
- Install IPEX-LLM on Windows with Intel GPU
- Run IPEX-LLM on Intel NPU
- Run Performance Benchmarking with IPEX-LLM
- Run Local RAG using Langchain-Chatchat on Intel GPU
- Run Text Generation WebUI on Intel GPU
- Run Open WebUI on Intel GPU
- Run PrivateGPT with IPEX-LLM on Intel GPU
- Run Coding Copilot (Continue) in VSCode with Intel GPU
- Run Dify on Intel GPU
- Run llama.cpp with IPEX-LLM on Intel GPU
- Run Ollama with IPEX-LLM on Intel GPU
- Run Ollama Portable Zip on Intel GPU with IPEX-LLM
- Run llama.cpp Portable Zip on Intel NPU with IPEX-LLM
- Run Llama 3 on Intel GPU using llama.cpp and ollama with IPEX-LLM
- Run RAGFlow with IPEX-LLM on Intel GPU
- Run GraphRAG with IPEX-LLM on Intel GPU