Download Langchain-Chatchat with IPEX-LLM integrations from this link. Unzip the content into a directory, e.g. C:\Users\arda\Downloads\Langchain-Chatchat-ipex-llm.
Visit the Install IPEX-LLM on Windows with Intel GPU Guide, and follow Install Prerequisites to install Visual Studio, the GPU driver, and Conda.
Open Anaconda Prompt (miniconda3), and run the following commands to create a new Python environment:
conda create -n ipex-llm-langchain-chatchat python=3.11 libuv
conda activate ipex-llm-langchain-chatchat
Note
When creating the conda environment we used Python 3.11, which differs from the default Python 3.9 recommended in Install IPEX-LLM on Windows with Intel GPU.
With the environment activated, install the oneAPI runtime dependencies and IPEX-LLM:
pip install dpcpp-cpp-rt==2024.0.2 mkl-dpcpp==2024.0.0 onednn==2024.0.0
pip install --pre --upgrade ipex-llm[xpu] -f https://developer.intel.com/ipex-whl-stable-xpu
pip install --pre --upgrade torchaudio==2.1.0a0 -f https://developer.intel.com/ipex-whl-stable-xpu
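After installation, you can optionally run a quick sanity check to confirm that the XPU build of PyTorch can see your Intel GPU. This is a minimal sketch, assuming the prerequisites (GPU driver, oneAPI runtime) are in place:

```python
# Sanity check (a sketch): confirm the XPU build of PyTorch and IPEX
# import cleanly and that an Intel GPU is visible to the runtime.
import torch
import intel_extension_for_pytorch as ipex  # noqa: F401 -- registers the XPU backend

print(torch.__version__)         # should report an XPU build, e.g. 2.1.0a0+...
print(torch.xpu.is_available())  # True if the Intel GPU is usable
```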
- In the root directory of Langchain-Chatchat, run the following command to create the config files:
python copy_config_example.py
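This script materializes the bundled configuration templates. As a rough sketch of what it effectively does (an assumption, not the verbatim script; it presumes the templates under configs/ carry an .example suffix):

```python
# Approximate behavior of copy_config_example.py (assumption):
# copy each configs/*.example template to its active file name.
import shutil
from pathlib import Path

for template in Path("configs").glob("*.example"):
    # e.g. model_config.py.example -> model_config.py
    shutil.copyfile(template, template.with_suffix(""))
```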
- Edit the file configs/model_config.py, and change MODEL_ROOT_PATH to the absolute path of the parent directory where all the downloaded models (LLMs, embedding models, ranking models, etc.) are stored.
- Download the models and place them in the directory MODEL_ROOT_PATH (refer to details in the Configuration section).
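For example, the edit might look like the following (the path below is a hypothetical example; substitute your own models directory):

```python
# configs/model_config.py (excerpt)
# Hypothetical path -- point this at the parent directory of your downloaded models.
MODEL_ROOT_PATH = r"C:\Users\arda\Downloads\models"
```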
Currently, only the LLM and embedding models listed in the table below are supported; you can download them via the links provided in the table. Note: ensure each model folder name matches the last segment of the model ID (the part after the "/"). For example, for THUDM/chatglm3-6b the model folder name should be chatglm3-6b (see the sketch after the table).
| Model | Category | Download Link |
|---|---|---|
| THUDM/chatglm3-6b | Chinese LLM | HF or ModelScope |
| meta-llama/Llama-2-7b-chat-hf | English LLM | HF |
| BAAI/bge-large-zh-v1.5 | Chinese Embedding | HF |
| BAAI/bge-large-en-v1.5 | English Embedding | HF |
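To illustrate the naming rule, here is a small sketch that derives each expected folder name from its model ID and checks that it exists under MODEL_ROOT_PATH (the root path is a hypothetical example):

```python
# Verify the download layout (a sketch): each model folder must be named
# after the last segment of its model ID (the part after "/").
from pathlib import Path

MODEL_ROOT_PATH = Path(r"C:\Users\arda\Downloads\models")  # hypothetical example

model_ids = [
    "THUDM/chatglm3-6b",
    "meta-llama/Llama-2-7b-chat-hf",
    "BAAI/bge-large-zh-v1.5",
    "BAAI/bge-large-en-v1.5",
]
for model_id in model_ids:
    folder = MODEL_ROOT_PATH / model_id.split("/")[-1]
    print(f"{folder}: {'OK' if folder.is_dir() else 'MISSING'}")
```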
In Anaconda Prompt (miniconda3), run the following commands from the root directory of Langchain-Chatchat:
conda activate ipex-llm-langchain-chatchat
set USE_XETLA=OFF
set SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
set SYCL_CACHE_PERSISTENT=1
set BIGDL_QUANTIZE_KV_CACHE=1
set BIGDL_LLM_XMX_DISABLED=1
set no_proxy=localhost,127.0.0.1
set BIGDL_IMPORT_IPEX=0
python startup.py -a
Note
The above configuration yields optimal performance for Intel Arc™ A-Series Graphics, with the exception of the Intel Arc™ A300-Series and Pro A60.
You can find the Web UI's URL printed in the terminal logs, e.g. http://localhost:8501/. Open a browser and navigate to this URL to use the Web UI.
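Once the server is running, you can also confirm that the Web UI is reachable programmatically. A minimal sketch, assuming the default port 8501 from the example URL above:

```python
# Ping the Web UI (a sketch); adjust the port if your logs show a different one.
import urllib.request

with urllib.request.urlopen("http://localhost:8501/", timeout=10) as resp:
    print(resp.status)  # 200 means the Web UI is serving
```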