From a644e9409bcd6b6c1807aafabc69d06dd694c2c9 Mon Sep 17 00:00:00 2001
From: Zijie Li
Date: Tue, 4 Jun 2024 10:14:02 +0800
Subject: [PATCH] Miniconda/Anaconda -> Miniforge update in examples (#11194)

* Change installation address

  Change former address: "https://docs.conda.io/en/latest/miniconda.html#"
  to new address: "https://conda-forge.org/download/" for 63 occurrences
  under python\llm\example

* Change Prompt

  Change "Anaconda Prompt" to "Miniforge Prompt" for 1 occurrence
---
 .../Advanced-Quantizations/GGUF/README.md | 2 +-
 .../CPU/HF-Transformers-AutoModels/Model/aquila/README.md | 2 +-
 .../CPU/HF-Transformers-AutoModels/Model/aquila2/README.md | 2 +-
 .../CPU/HF-Transformers-AutoModels/Model/chatglm/README.md | 2 +-
 .../CPU/HF-Transformers-AutoModels/Model/codegemma/README.md | 2 +-
 .../CPU/HF-Transformers-AutoModels/Model/codeshell/README.md | 2 +-
 .../CPU/HF-Transformers-AutoModels/Model/deepseek-moe/README.md | 2 +-
 .../HF-Transformers-AutoModels/Model/distil-whisper/README.md | 2 +-
 .../CPU/HF-Transformers-AutoModels/Model/flan-t5/README.md | 2 +-
 .../example/CPU/HF-Transformers-AutoModels/Model/fuyu/README.md | 2 +-
 .../Model/internlm-xcomposer/README.md | 2 +-
 .../CPU/HF-Transformers-AutoModels/Model/mistral/README.md | 2 +-
 .../CPU/HF-Transformers-AutoModels/Model/mixtral/README.md | 2 +-
 .../CPU/HF-Transformers-AutoModels/Model/phi-1_5/README.md | 2 +-
 .../CPU/HF-Transformers-AutoModels/Model/phi-2/README.md | 2 +-
 .../CPU/HF-Transformers-AutoModels/Model/phi-3/README.md | 2 +-
 .../CPU/HF-Transformers-AutoModels/Model/phixtral/README.md | 2 +-
 .../CPU/HF-Transformers-AutoModels/Model/qwen-vl/README.md | 2 +-
 .../CPU/HF-Transformers-AutoModels/Model/replit/README.md | 2 +-
 .../CPU/HF-Transformers-AutoModels/Model/stablelm/README.md | 2 +-
 .../example/CPU/HF-Transformers-AutoModels/Model/yi/README.md | 2 +-
 .../CPU/HF-Transformers-AutoModels/Model/yuan2/README.md | 2 +-
 .../example/CPU/HF-Transformers-AutoModels/Model/ziya/README.md | 2 +-
 python/llm/example/CPU/PyTorch-Models/Model/aquila2/README.md | 2 +-
 python/llm/example/CPU/PyTorch-Models/Model/bark/README.md | 2 +-
 python/llm/example/CPU/PyTorch-Models/Model/bert/README.md | 2 +-
 python/llm/example/CPU/PyTorch-Models/Model/bluelm/README.md | 2 +-
 python/llm/example/CPU/PyTorch-Models/Model/chatglm/README.md | 2 +-
 python/llm/example/CPU/PyTorch-Models/Model/chatglm3/README.md | 2 +-
 python/llm/example/CPU/PyTorch-Models/Model/codegemma/README.md | 2 +-
 python/llm/example/CPU/PyTorch-Models/Model/codellama/README.md | 2 +-
 python/llm/example/CPU/PyTorch-Models/Model/codeshell/README.md | 2 +-
 python/llm/example/CPU/PyTorch-Models/Model/deciLM-7b/README.md | 2 +-
 .../llm/example/CPU/PyTorch-Models/Model/deepseek-moe/README.md | 2 +-
 python/llm/example/CPU/PyTorch-Models/Model/deepseek/README.md | 2 +-
 .../example/CPU/PyTorch-Models/Model/distil-whisper/README.md | 2 +-
 python/llm/example/CPU/PyTorch-Models/Model/flan-t5/README.md | 2 +-
 python/llm/example/CPU/PyTorch-Models/Model/fuyu/README.md | 2 +-
 .../CPU/PyTorch-Models/Model/internlm-xcomposer/README.md | 2 +-
 python/llm/example/CPU/PyTorch-Models/Model/internlm2/README.md | 2 +-
 python/llm/example/CPU/PyTorch-Models/Model/llama2/README.md | 2 +-
 python/llm/example/CPU/PyTorch-Models/Model/llama3/README.md | 2 +-
 python/llm/example/CPU/PyTorch-Models/Model/llava/README.md | 2 +-
 python/llm/example/CPU/PyTorch-Models/Model/mamba/README.md | 2 +-
 python/llm/example/CPU/PyTorch-Models/Model/mistral/README.md | 2 +-
 python/llm/example/CPU/PyTorch-Models/Model/mixtral/README.md | 2 +-
 python/llm/example/CPU/PyTorch-Models/Model/phi-1_5/README.md | 2 +-
 python/llm/example/CPU/PyTorch-Models/Model/phi-2/README.md | 2 +-
 python/llm/example/CPU/PyTorch-Models/Model/phi-3/README.md | 2 +-
 python/llm/example/CPU/PyTorch-Models/Model/phixtral/README.md | 2 +-
 python/llm/example/CPU/PyTorch-Models/Model/qwen-vl/README.md | 2 +-
 python/llm/example/CPU/PyTorch-Models/Model/qwen1.5/README.md | 2 +-
 python/llm/example/CPU/PyTorch-Models/Model/skywork/README.md | 2 +-
 python/llm/example/CPU/PyTorch-Models/Model/solar/README.md | 2 +-
 python/llm/example/CPU/PyTorch-Models/Model/stablelm/README.md | 2 +-
 .../CPU/PyTorch-Models/Model/wizardcoder-python/README.md | 2 +-
 python/llm/example/CPU/PyTorch-Models/Model/yi/README.md | 2 +-
 python/llm/example/CPU/PyTorch-Models/Model/yuan2/README.md | 2 +-
 python/llm/example/CPU/PyTorch-Models/Model/ziya/README.md | 2 +-
 python/llm/example/CPU/Speculative-Decoding/EAGLE/README.md | 2 +-
 .../Advanced-Quantizations/GGUF/README.md | 2 +-
 .../GPU/HF-Transformers-AutoModels/Model/codegemma/README.md | 2 +-
 python/llm/example/GPU/PyTorch-Models/Model/codegemma/README.md | 2 +-
 python/llm/scripts/README.md | 2 +-
 64 files changed, 64 insertions(+), 64 deletions(-)

diff --git a/python/llm/example/CPU/HF-Transformers-AutoModels/Advanced-Quantizations/GGUF/README.md b/python/llm/example/CPU/HF-Transformers-AutoModels/Advanced-Quantizations/GGUF/README.md
index 9d82496ea7d..74c5c7871bd 100644
--- a/python/llm/example/CPU/HF-Transformers-AutoModels/Advanced-Quantizations/GGUF/README.md
+++ b/python/llm/example/CPU/HF-Transformers-AutoModels/Advanced-Quantizations/GGUF/README.md
@@ -21,7 +21,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 In the example [generate.py](./generate.py), we show a basic use case to load a GGUF LLaMA2 model into `ipex-llm` using `from_gguf()` API, with IPEX-LLM optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).
 After installing conda, create a Python environment for IPEX-LLM:
diff --git a/python/llm/example/CPU/HF-Transformers-AutoModels/Model/aquila/README.md b/python/llm/example/CPU/HF-Transformers-AutoModels/Model/aquila/README.md
index 93f07a06702..e38cbf6f556 100644
--- a/python/llm/example/CPU/HF-Transformers-AutoModels/Model/aquila/README.md
+++ b/python/llm/example/CPU/HF-Transformers-AutoModels/Model/aquila/README.md
@@ -12,7 +12,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for a Aquila model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).
 After installing conda, create a Python environment for IPEX-LLM:
diff --git a/python/llm/example/CPU/HF-Transformers-AutoModels/Model/aquila2/README.md b/python/llm/example/CPU/HF-Transformers-AutoModels/Model/aquila2/README.md
index 730e7d4795c..fb7d16872fe 100644
--- a/python/llm/example/CPU/HF-Transformers-AutoModels/Model/aquila2/README.md
+++ b/python/llm/example/CPU/HF-Transformers-AutoModels/Model/aquila2/README.md
@@ -12,7 +12,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for a Aquila2 model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).
 After installing conda, create a Python environment for IPEX-LLM:
diff --git a/python/llm/example/CPU/HF-Transformers-AutoModels/Model/chatglm/README.md b/python/llm/example/CPU/HF-Transformers-AutoModels/Model/chatglm/README.md
index 1d7006b3617..2813602abaf 100644
--- a/python/llm/example/CPU/HF-Transformers-AutoModels/Model/chatglm/README.md
+++ b/python/llm/example/CPU/HF-Transformers-AutoModels/Model/chatglm/README.md
@@ -12,7 +12,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for a ChatGLM model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).
 After installing conda, create a Python environment for IPEX-LLM:
diff --git a/python/llm/example/CPU/HF-Transformers-AutoModels/Model/codegemma/README.md b/python/llm/example/CPU/HF-Transformers-AutoModels/Model/codegemma/README.md
index 76b96e9ae2a..500e4027652 100644
--- a/python/llm/example/CPU/HF-Transformers-AutoModels/Model/codegemma/README.md
+++ b/python/llm/example/CPU/HF-Transformers-AutoModels/Model/codegemma/README.md
@@ -7,7 +7,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for a CodeGemma model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).
 After installing conda, create a Python environment for IPEX-LLM:
diff --git a/python/llm/example/CPU/HF-Transformers-AutoModels/Model/codeshell/README.md b/python/llm/example/CPU/HF-Transformers-AutoModels/Model/codeshell/README.md
index ea5fd312370..b859e801463 100644
--- a/python/llm/example/CPU/HF-Transformers-AutoModels/Model/codeshell/README.md
+++ b/python/llm/example/CPU/HF-Transformers-AutoModels/Model/codeshell/README.md
@@ -12,7 +12,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for a CodeShell model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).
 After installing conda, create a Python environment for IPEX-LLM:
diff --git a/python/llm/example/CPU/HF-Transformers-AutoModels/Model/deepseek-moe/README.md b/python/llm/example/CPU/HF-Transformers-AutoModels/Model/deepseek-moe/README.md
index ff3fc050c54..0ad5949daf8 100644
--- a/python/llm/example/CPU/HF-Transformers-AutoModels/Model/deepseek-moe/README.md
+++ b/python/llm/example/CPU/HF-Transformers-AutoModels/Model/deepseek-moe/README.md
@@ -12,7 +12,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for a DeepSeek-MoE model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).
 After installing conda, create a Python environment for IPEX-LLM:
diff --git a/python/llm/example/CPU/HF-Transformers-AutoModels/Model/distil-whisper/README.md b/python/llm/example/CPU/HF-Transformers-AutoModels/Model/distil-whisper/README.md
index 4b57416c4d0..217a149daed 100644
--- a/python/llm/example/CPU/HF-Transformers-AutoModels/Model/distil-whisper/README.md
+++ b/python/llm/example/CPU/HF-Transformers-AutoModels/Model/distil-whisper/README.md
@@ -8,7 +8,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Recognize Tokens using `generate()` API
 In the example [recognize.py](./recognize.py), we show a basic use case for a Distil-Whisper model to conduct transcription using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).
 After installing conda, create a Python environment for IPEX-LLM:
diff --git a/python/llm/example/CPU/HF-Transformers-AutoModels/Model/flan-t5/README.md b/python/llm/example/CPU/HF-Transformers-AutoModels/Model/flan-t5/README.md
index de95d858bd5..15a4b22b431 100644
--- a/python/llm/example/CPU/HF-Transformers-AutoModels/Model/flan-t5/README.md
+++ b/python/llm/example/CPU/HF-Transformers-AutoModels/Model/flan-t5/README.md
@@ -8,7 +8,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for a Flan-t5 model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).
 After installing conda, create a Python environment for IPEX-LLM:
diff --git a/python/llm/example/CPU/HF-Transformers-AutoModels/Model/fuyu/README.md b/python/llm/example/CPU/HF-Transformers-AutoModels/Model/fuyu/README.md
index 49942b010fc..eeaa969444b 100644
--- a/python/llm/example/CPU/HF-Transformers-AutoModels/Model/fuyu/README.md
+++ b/python/llm/example/CPU/HF-Transformers-AutoModels/Model/fuyu/README.md
@@ -7,7 +7,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for an Fuyu model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).
 After installing conda, create a Python environment for IPEX-LLM:
diff --git a/python/llm/example/CPU/HF-Transformers-AutoModels/Model/internlm-xcomposer/README.md b/python/llm/example/CPU/HF-Transformers-AutoModels/Model/internlm-xcomposer/README.md
index 3deb7bb21bf..9c1b02b20ee 100644
--- a/python/llm/example/CPU/HF-Transformers-AutoModels/Model/internlm-xcomposer/README.md
+++ b/python/llm/example/CPU/HF-Transformers-AutoModels/Model/internlm-xcomposer/README.md
@@ -7,7 +7,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Multi-turn chat centered around an image using `chat()` API
 In the example [chat.py](./chat.py), we show a basic use case for an InternLM_XComposer model to start a multi-turn chat centered around an image using `chat()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).
 After installing conda, create a Python environment for IPEX-LLM:
diff --git a/python/llm/example/CPU/HF-Transformers-AutoModels/Model/mistral/README.md b/python/llm/example/CPU/HF-Transformers-AutoModels/Model/mistral/README.md
index 78abbe27514..49be918ab16 100644
--- a/python/llm/example/CPU/HF-Transformers-AutoModels/Model/mistral/README.md
+++ b/python/llm/example/CPU/HF-Transformers-AutoModels/Model/mistral/README.md
@@ -9,7 +9,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for a Mistral model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).
 After installing conda, create a Python environment for IPEX-LLM:
diff --git a/python/llm/example/CPU/HF-Transformers-AutoModels/Model/mixtral/README.md b/python/llm/example/CPU/HF-Transformers-AutoModels/Model/mixtral/README.md
index 6514817e5ca..08b4f064d17 100644
--- a/python/llm/example/CPU/HF-Transformers-AutoModels/Model/mixtral/README.md
+++ b/python/llm/example/CPU/HF-Transformers-AutoModels/Model/mixtral/README.md
@@ -9,7 +9,7 @@ To run these examples with IPEX-LLM on Intel CPUs, we have some recommended requ
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for a Mixtral model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations on Intel CPUs.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).
 After installing conda, create a Python environment for IPEX-LLM:
diff --git a/python/llm/example/CPU/HF-Transformers-AutoModels/Model/phi-1_5/README.md b/python/llm/example/CPU/HF-Transformers-AutoModels/Model/phi-1_5/README.md
index e3c32c740d9..e8ef3b3887e 100644
--- a/python/llm/example/CPU/HF-Transformers-AutoModels/Model/phi-1_5/README.md
+++ b/python/llm/example/CPU/HF-Transformers-AutoModels/Model/phi-1_5/README.md
@@ -12,7 +12,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for a phi-1_5 model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).
 After installing conda, create a Python environment for IPEX-LLM:
diff --git a/python/llm/example/CPU/HF-Transformers-AutoModels/Model/phi-2/README.md b/python/llm/example/CPU/HF-Transformers-AutoModels/Model/phi-2/README.md
index b211fd9545a..87efda38c60 100644
--- a/python/llm/example/CPU/HF-Transformers-AutoModels/Model/phi-2/README.md
+++ b/python/llm/example/CPU/HF-Transformers-AutoModels/Model/phi-2/README.md
@@ -12,7 +12,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for a phi-2 model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).
 After installing conda, create a Python environment for IPEX-LLM:
diff --git a/python/llm/example/CPU/HF-Transformers-AutoModels/Model/phi-3/README.md b/python/llm/example/CPU/HF-Transformers-AutoModels/Model/phi-3/README.md
index ff9f870be7d..8f4135ec48a 100644
--- a/python/llm/example/CPU/HF-Transformers-AutoModels/Model/phi-3/README.md
+++ b/python/llm/example/CPU/HF-Transformers-AutoModels/Model/phi-3/README.md
@@ -12,7 +12,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for a phi-3 model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).
 After installing conda, create a Python environment for IPEX-LLM:
diff --git a/python/llm/example/CPU/HF-Transformers-AutoModels/Model/phixtral/README.md b/python/llm/example/CPU/HF-Transformers-AutoModels/Model/phixtral/README.md
index 76563a99033..1e0d919829a 100644
--- a/python/llm/example/CPU/HF-Transformers-AutoModels/Model/phixtral/README.md
+++ b/python/llm/example/CPU/HF-Transformers-AutoModels/Model/phixtral/README.md
@@ -12,7 +12,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for a phixtral model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).
 After installing conda, create a Python environment for IPEX-LLM:
diff --git a/python/llm/example/CPU/HF-Transformers-AutoModels/Model/qwen-vl/README.md b/python/llm/example/CPU/HF-Transformers-AutoModels/Model/qwen-vl/README.md
index 777c60df3f3..7dc3dedc5cb 100644
--- a/python/llm/example/CPU/HF-Transformers-AutoModels/Model/qwen-vl/README.md
+++ b/python/llm/example/CPU/HF-Transformers-AutoModels/Model/qwen-vl/README.md
@@ -7,7 +7,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Multimodal chat using `chat()` API
 In the example [chat.py](./chat.py), we show a basic use case for a Qwen-VL model to start a multimodal chat using `chat()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).
 After installing conda, create a Python environment for IPEX-LLM:
diff --git a/python/llm/example/CPU/HF-Transformers-AutoModels/Model/replit/README.md b/python/llm/example/CPU/HF-Transformers-AutoModels/Model/replit/README.md
index 7a973a18446..558ee244acb 100644
--- a/python/llm/example/CPU/HF-Transformers-AutoModels/Model/replit/README.md
+++ b/python/llm/example/CPU/HF-Transformers-AutoModels/Model/replit/README.md
@@ -7,7 +7,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for an Replit model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).
 After installing conda, create a Python environment for IPEX-LLM:
diff --git a/python/llm/example/CPU/HF-Transformers-AutoModels/Model/stablelm/README.md b/python/llm/example/CPU/HF-Transformers-AutoModels/Model/stablelm/README.md
index 64436a4ac83..08aed2a20e3 100644
--- a/python/llm/example/CPU/HF-Transformers-AutoModels/Model/stablelm/README.md
+++ b/python/llm/example/CPU/HF-Transformers-AutoModels/Model/stablelm/README.md
@@ -7,7 +7,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for a StableLM model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).
 After installing conda, create a Python environment for IPEX-LLM:
diff --git a/python/llm/example/CPU/HF-Transformers-AutoModels/Model/yi/README.md b/python/llm/example/CPU/HF-Transformers-AutoModels/Model/yi/README.md
index b3ea29d4457..70c9d662f92 100644
--- a/python/llm/example/CPU/HF-Transformers-AutoModels/Model/yi/README.md
+++ b/python/llm/example/CPU/HF-Transformers-AutoModels/Model/yi/README.md
@@ -7,7 +7,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for an Yi model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).
 After installing conda, create a Python environment for IPEX-LLM:
diff --git a/python/llm/example/CPU/HF-Transformers-AutoModels/Model/yuan2/README.md b/python/llm/example/CPU/HF-Transformers-AutoModels/Model/yuan2/README.md
index d39cdf5bf52..fb5c358b502 100644
--- a/python/llm/example/CPU/HF-Transformers-AutoModels/Model/yuan2/README.md
+++ b/python/llm/example/CPU/HF-Transformers-AutoModels/Model/yuan2/README.md
@@ -9,7 +9,7 @@ In addition, you need to modify some files in Yuan2-2B-hf folder, since Flash at
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for an Yuan2 model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).
 After installing conda, create a Python environment for IPEX-LLM:
diff --git a/python/llm/example/CPU/HF-Transformers-AutoModels/Model/ziya/README.md b/python/llm/example/CPU/HF-Transformers-AutoModels/Model/ziya/README.md
index ff56fba8783..1582f794d97 100644
--- a/python/llm/example/CPU/HF-Transformers-AutoModels/Model/ziya/README.md
+++ b/python/llm/example/CPU/HF-Transformers-AutoModels/Model/ziya/README.md
@@ -12,7 +12,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for a Ziya model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).
 After installing conda, create a Python environment for IPEX-LLM:
diff --git a/python/llm/example/CPU/PyTorch-Models/Model/aquila2/README.md b/python/llm/example/CPU/PyTorch-Models/Model/aquila2/README.md
index 50526cb78bc..0bbefe31b41 100644
--- a/python/llm/example/CPU/PyTorch-Models/Model/aquila2/README.md
+++ b/python/llm/example/CPU/PyTorch-Models/Model/aquila2/README.md
@@ -7,7 +7,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for a Aquila2 model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).
 After installing conda, create a Python environment for IPEX-LLM:
diff --git a/python/llm/example/CPU/PyTorch-Models/Model/bark/README.md b/python/llm/example/CPU/PyTorch-Models/Model/bark/README.md
index e014a3e1524..971c6a759a6 100644
--- a/python/llm/example/CPU/PyTorch-Models/Model/bark/README.md
+++ b/python/llm/example/CPU/PyTorch-Models/Model/bark/README.md
@@ -7,7 +7,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Synthesize speech with the given input text
 In the example [synthesize_speech.py](./synthesize_speech.py), we show a basic use case for Bark model to synthesize speech based on the given text, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).
 After installing conda, create a Python environment for IPEX-LLM:
diff --git a/python/llm/example/CPU/PyTorch-Models/Model/bert/README.md b/python/llm/example/CPU/PyTorch-Models/Model/bert/README.md
index bf9eee36715..66ed443807f 100644
--- a/python/llm/example/CPU/PyTorch-Models/Model/bert/README.md
+++ b/python/llm/example/CPU/PyTorch-Models/Model/bert/README.md
@@ -7,7 +7,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Extract the feature of given text
 In the example [extract_feature.py](./extract_feature.py), we show a basic use case for a BERT model to extract the feature of given text, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).
 After installing conda, create a Python environment for IPEX-LLM:
diff --git a/python/llm/example/CPU/PyTorch-Models/Model/bluelm/README.md b/python/llm/example/CPU/PyTorch-Models/Model/bluelm/README.md
index e6f33415083..98495b3c644 100644
--- a/python/llm/example/CPU/PyTorch-Models/Model/bluelm/README.md
+++ b/python/llm/example/CPU/PyTorch-Models/Model/bluelm/README.md
@@ -7,7 +7,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for a BlueLM model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).
 After installing conda, create a Python environment for IPEX-LLM:
diff --git a/python/llm/example/CPU/PyTorch-Models/Model/chatglm/README.md b/python/llm/example/CPU/PyTorch-Models/Model/chatglm/README.md
index a387980dc44..b6b53991ffc 100644
--- a/python/llm/example/CPU/PyTorch-Models/Model/chatglm/README.md
+++ b/python/llm/example/CPU/PyTorch-Models/Model/chatglm/README.md
@@ -7,7 +7,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for a ChatGLM model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).
 After installing conda, create a Python environment for IPEX-LLM:
diff --git a/python/llm/example/CPU/PyTorch-Models/Model/chatglm3/README.md b/python/llm/example/CPU/PyTorch-Models/Model/chatglm3/README.md
index 736e3ce79b6..43e18acf1f0 100644
--- a/python/llm/example/CPU/PyTorch-Models/Model/chatglm3/README.md
+++ b/python/llm/example/CPU/PyTorch-Models/Model/chatglm3/README.md
@@ -7,7 +7,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for a ChatGLM3 model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).
 After installing conda, create a Python environment for IPEX-LLM:
diff --git a/python/llm/example/CPU/PyTorch-Models/Model/codegemma/README.md b/python/llm/example/CPU/PyTorch-Models/Model/codegemma/README.md
index d0edbf9465b..81831376da7 100644
--- a/python/llm/example/CPU/PyTorch-Models/Model/codegemma/README.md
+++ b/python/llm/example/CPU/PyTorch-Models/Model/codegemma/README.md
@@ -7,7 +7,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for a CodeGemma model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).
 After installing conda, create a Python environment for IPEX-LLM:
diff --git a/python/llm/example/CPU/PyTorch-Models/Model/codellama/README.md b/python/llm/example/CPU/PyTorch-Models/Model/codellama/README.md
index 8504713b378..7705f2d78ca 100644
--- a/python/llm/example/CPU/PyTorch-Models/Model/codellama/README.md
+++ b/python/llm/example/CPU/PyTorch-Models/Model/codellama/README.md
@@ -7,7 +7,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for a CodeLlama model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).
 After installing conda, create a Python environment for IPEX-LLM:
diff --git a/python/llm/example/CPU/PyTorch-Models/Model/codeshell/README.md b/python/llm/example/CPU/PyTorch-Models/Model/codeshell/README.md
index 2b9b9c1dec1..2ad00dad793 100644
--- a/python/llm/example/CPU/PyTorch-Models/Model/codeshell/README.md
+++ b/python/llm/example/CPU/PyTorch-Models/Model/codeshell/README.md
@@ -7,7 +7,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for a CodeShell model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).
 After installing conda, create a Python environment for IPEX-LLM:
diff --git a/python/llm/example/CPU/PyTorch-Models/Model/deciLM-7b/README.md b/python/llm/example/CPU/PyTorch-Models/Model/deciLM-7b/README.md
index 62c89a57bad..7ed2d846f49 100644
--- a/python/llm/example/CPU/PyTorch-Models/Model/deciLM-7b/README.md
+++ b/python/llm/example/CPU/PyTorch-Models/Model/deciLM-7b/README.md
@@ -7,7 +7,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for a DeciLM-7B model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).
 After installing conda, create a Python environment for IPEX-LLM:
diff --git a/python/llm/example/CPU/PyTorch-Models/Model/deepseek-moe/README.md b/python/llm/example/CPU/PyTorch-Models/Model/deepseek-moe/README.md
index e0ed005931d..f7e2035d7f2 100644
--- a/python/llm/example/CPU/PyTorch-Models/Model/deepseek-moe/README.md
+++ b/python/llm/example/CPU/PyTorch-Models/Model/deepseek-moe/README.md
@@ -7,7 +7,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for a deepseek-moe model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).
 After installing conda, create a Python environment for IPEX-LLM:
diff --git a/python/llm/example/CPU/PyTorch-Models/Model/deepseek/README.md b/python/llm/example/CPU/PyTorch-Models/Model/deepseek/README.md
index 88315963df5..0de0dc8d3c1 100644
--- a/python/llm/example/CPU/PyTorch-Models/Model/deepseek/README.md
+++ b/python/llm/example/CPU/PyTorch-Models/Model/deepseek/README.md
@@ -7,7 +7,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for a Deepseek model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).
 After installing conda, create a Python environment for IPEX-LLM:
diff --git a/python/llm/example/CPU/PyTorch-Models/Model/distil-whisper/README.md b/python/llm/example/CPU/PyTorch-Models/Model/distil-whisper/README.md
index 2ab17e17c83..35166a6d9d2 100644
--- a/python/llm/example/CPU/PyTorch-Models/Model/distil-whisper/README.md
+++ b/python/llm/example/CPU/PyTorch-Models/Model/distil-whisper/README.md
@@ -8,7 +8,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Recognize Tokens using `generate()` API
 In the example [recognize.py](./recognize.py), we show a basic use case for a Distil-Whisper model to conduct transcription using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).
 After installing conda, create a Python environment for IPEX-LLM:
diff --git a/python/llm/example/CPU/PyTorch-Models/Model/flan-t5/README.md b/python/llm/example/CPU/PyTorch-Models/Model/flan-t5/README.md
index de95d858bd5..15a4b22b431 100644
--- a/python/llm/example/CPU/PyTorch-Models/Model/flan-t5/README.md
+++ b/python/llm/example/CPU/PyTorch-Models/Model/flan-t5/README.md
@@ -8,7 +8,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for a Flan-t5 model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).
 After installing conda, create a Python environment for IPEX-LLM:
diff --git a/python/llm/example/CPU/PyTorch-Models/Model/fuyu/README.md b/python/llm/example/CPU/PyTorch-Models/Model/fuyu/README.md
index 84de78355bf..ee9c40431fd 100644
--- a/python/llm/example/CPU/PyTorch-Models/Model/fuyu/README.md
+++ b/python/llm/example/CPU/PyTorch-Models/Model/fuyu/README.md
@@ -7,7 +7,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for an Fuyu model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).
 After installing conda, create a Python environment for IPEX-LLM:
diff --git a/python/llm/example/CPU/PyTorch-Models/Model/internlm-xcomposer/README.md b/python/llm/example/CPU/PyTorch-Models/Model/internlm-xcomposer/README.md
index bc27022fd77..1f0775e17e6 100644
--- a/python/llm/example/CPU/PyTorch-Models/Model/internlm-xcomposer/README.md
+++ b/python/llm/example/CPU/PyTorch-Models/Model/internlm-xcomposer/README.md
@@ -7,7 +7,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Multi-turn chat centered around an image using `chat()` API
 In the example [chat.py](./chat.py), we show a basic use case for an InternLM_XComposer model to start a multi-turn chat centered around an image using `chat()` API, with IPEX-LLM 'optimize_model' API.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).
 After installing conda, create a Python environment for IPEX-LLM:
diff --git a/python/llm/example/CPU/PyTorch-Models/Model/internlm2/README.md b/python/llm/example/CPU/PyTorch-Models/Model/internlm2/README.md
index 024132700cb..c3588a15c09 100644
--- a/python/llm/example/CPU/PyTorch-Models/Model/internlm2/README.md
+++ b/python/llm/example/CPU/PyTorch-Models/Model/internlm2/README.md
@@ -7,7 +7,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for a InternLM2 model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).
 After installing conda, create a Python environment for IPEX-LLM:
diff --git a/python/llm/example/CPU/PyTorch-Models/Model/llama2/README.md b/python/llm/example/CPU/PyTorch-Models/Model/llama2/README.md
index 2d56b03fb68..bd9083cc6da 100644
--- a/python/llm/example/CPU/PyTorch-Models/Model/llama2/README.md
+++ b/python/llm/example/CPU/PyTorch-Models/Model/llama2/README.md
@@ -7,7 +7,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for a Llama2 model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).
 After installing conda, create a Python environment for IPEX-LLM:
diff --git a/python/llm/example/CPU/PyTorch-Models/Model/llama3/README.md b/python/llm/example/CPU/PyTorch-Models/Model/llama3/README.md
index f50a7ebf5ae..f518d8873b4 100644
--- a/python/llm/example/CPU/PyTorch-Models/Model/llama3/README.md
+++ b/python/llm/example/CPU/PyTorch-Models/Model/llama3/README.md
@@ -7,7 +7,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for a Llama3 model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).
 After installing conda, create a Python environment for IPEX-LLM:
diff --git a/python/llm/example/CPU/PyTorch-Models/Model/llava/README.md b/python/llm/example/CPU/PyTorch-Models/Model/llava/README.md
index 08ea2c0e804..aa44bf44e44 100644
--- a/python/llm/example/CPU/PyTorch-Models/Model/llava/README.md
+++ b/python/llm/example/CPU/PyTorch-Models/Model/llava/README.md
@@ -8,7 +8,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Multi-turn chat centered around an image using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for a LLaVA model to start a multi-turn chat centered around an image using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).
 After installing conda, create a Python environment for IPEX-LLM:
diff --git a/python/llm/example/CPU/PyTorch-Models/Model/mamba/README.md b/python/llm/example/CPU/PyTorch-Models/Model/mamba/README.md
index bd47c7b2f90..37a4594989a 100644
--- a/python/llm/example/CPU/PyTorch-Models/Model/mamba/README.md
+++ b/python/llm/example/CPU/PyTorch-Models/Model/mamba/README.md
@@ -7,7 +7,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for a Mamba model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).
 After installing conda, create a Python environment for IPEX-LLM:
diff --git a/python/llm/example/CPU/PyTorch-Models/Model/mistral/README.md b/python/llm/example/CPU/PyTorch-Models/Model/mistral/README.md
index e058a716eec..c3b3227a837 100644
--- a/python/llm/example/CPU/PyTorch-Models/Model/mistral/README.md
+++ b/python/llm/example/CPU/PyTorch-Models/Model/mistral/README.md
@@ -9,7 +9,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for a Mistral model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).
 After installing conda, create a Python environment for IPEX-LLM:
diff --git a/python/llm/example/CPU/PyTorch-Models/Model/mixtral/README.md b/python/llm/example/CPU/PyTorch-Models/Model/mixtral/README.md
index 6bbcc00841a..86253049875 100644
--- a/python/llm/example/CPU/PyTorch-Models/Model/mixtral/README.md
+++ b/python/llm/example/CPU/PyTorch-Models/Model/mixtral/README.md
@@ -9,7 +9,7 @@ To run these examples with IPEX-LLM on Intel CPUs, we have some recommended requ
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for a Mixtral model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations on Intel CPUs.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).
 After installing conda, create a Python environment for IPEX-LLM:
diff --git a/python/llm/example/CPU/PyTorch-Models/Model/phi-1_5/README.md b/python/llm/example/CPU/PyTorch-Models/Model/phi-1_5/README.md
index 65be1ecae69..f006af538ce 100644
--- a/python/llm/example/CPU/PyTorch-Models/Model/phi-1_5/README.md
+++ b/python/llm/example/CPU/PyTorch-Models/Model/phi-1_5/README.md
@@ -7,7 +7,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for a phi-1_5 model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).
 After installing conda, create a Python environment for IPEX-LLM:
diff --git a/python/llm/example/CPU/PyTorch-Models/Model/phi-2/README.md b/python/llm/example/CPU/PyTorch-Models/Model/phi-2/README.md
index 2320490d03f..0ce86773a27 100644
--- a/python/llm/example/CPU/PyTorch-Models/Model/phi-2/README.md
+++ b/python/llm/example/CPU/PyTorch-Models/Model/phi-2/README.md
@@ -7,7 +7,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for a phi-2 model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).
 After installing conda, create a Python environment for IPEX-LLM:
diff --git a/python/llm/example/CPU/PyTorch-Models/Model/phi-3/README.md b/python/llm/example/CPU/PyTorch-Models/Model/phi-3/README.md
index 66b9eac9beb..d20b271ce9d 100644
--- a/python/llm/example/CPU/PyTorch-Models/Model/phi-3/README.md
+++ b/python/llm/example/CPU/PyTorch-Models/Model/phi-3/README.md
@@ -12,7 +12,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for a phi-3 model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).
 After installing conda, create a Python environment for IPEX-LLM:
diff --git a/python/llm/example/CPU/PyTorch-Models/Model/phixtral/README.md b/python/llm/example/CPU/PyTorch-Models/Model/phixtral/README.md
index 3daadbadb7a..b1d1f0d8da6 100644
--- a/python/llm/example/CPU/PyTorch-Models/Model/phixtral/README.md
+++ b/python/llm/example/CPU/PyTorch-Models/Model/phixtral/README.md
@@ -7,7 +7,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for a phixtral model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).
 
 After installing conda, create a Python environment for IPEX-LLM:
diff --git a/python/llm/example/CPU/PyTorch-Models/Model/qwen-vl/README.md b/python/llm/example/CPU/PyTorch-Models/Model/qwen-vl/README.md
index b28d49e60b4..25744465c26 100644
--- a/python/llm/example/CPU/PyTorch-Models/Model/qwen-vl/README.md
+++ b/python/llm/example/CPU/PyTorch-Models/Model/qwen-vl/README.md
@@ -7,7 +7,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Multimodal chat using `chat()` API
 In the example [chat.py](./chat.py), we show a basic use case for a Qwen-VL model to start a multimodal chat using `chat()` API, with IPEX-LLM 'optimize_model' API.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).
 
 After installing conda, create a Python environment for IPEX-LLM:
diff --git a/python/llm/example/CPU/PyTorch-Models/Model/qwen1.5/README.md b/python/llm/example/CPU/PyTorch-Models/Model/qwen1.5/README.md
index 7841702b92f..09ce24ec3e3 100644
--- a/python/llm/example/CPU/PyTorch-Models/Model/qwen1.5/README.md
+++ b/python/llm/example/CPU/PyTorch-Models/Model/qwen1.5/README.md
@@ -7,7 +7,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for a Qwen1.5 model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).
 
 After installing conda, create a Python environment for IPEX-LLM:
diff --git a/python/llm/example/CPU/PyTorch-Models/Model/skywork/README.md b/python/llm/example/CPU/PyTorch-Models/Model/skywork/README.md
index 71277e69ec3..65afec706bb 100644
--- a/python/llm/example/CPU/PyTorch-Models/Model/skywork/README.md
+++ b/python/llm/example/CPU/PyTorch-Models/Model/skywork/README.md
@@ -7,7 +7,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for a Skywork model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).
 
 After installing conda, create a Python environment for IPEX-LLM:
diff --git a/python/llm/example/CPU/PyTorch-Models/Model/solar/README.md b/python/llm/example/CPU/PyTorch-Models/Model/solar/README.md
index 89bea91ec27..1d172795619 100644
--- a/python/llm/example/CPU/PyTorch-Models/Model/solar/README.md
+++ b/python/llm/example/CPU/PyTorch-Models/Model/solar/README.md
@@ -7,7 +7,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for a SOLAR model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).
 
 After installing conda, create a Python environment for IPEX-LLM:
diff --git a/python/llm/example/CPU/PyTorch-Models/Model/stablelm/README.md b/python/llm/example/CPU/PyTorch-Models/Model/stablelm/README.md
index d2a44a255c1..3166340ff69 100644
--- a/python/llm/example/CPU/PyTorch-Models/Model/stablelm/README.md
+++ b/python/llm/example/CPU/PyTorch-Models/Model/stablelm/README.md
@@ -7,7 +7,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for a StableLM model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).
 
 After installing conda, create a Python environment for IPEX-LLM:
diff --git a/python/llm/example/CPU/PyTorch-Models/Model/wizardcoder-python/README.md b/python/llm/example/CPU/PyTorch-Models/Model/wizardcoder-python/README.md
index 49f903c58be..2e266f5fa6f 100644
--- a/python/llm/example/CPU/PyTorch-Models/Model/wizardcoder-python/README.md
+++ b/python/llm/example/CPU/PyTorch-Models/Model/wizardcoder-python/README.md
@@ -7,7 +7,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for a WizardCoder-Python model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).
 
 After installing conda, create a Python environment for IPEX-LLM:
diff --git a/python/llm/example/CPU/PyTorch-Models/Model/yi/README.md b/python/llm/example/CPU/PyTorch-Models/Model/yi/README.md
index c7eb8f27599..d96e5a13757 100644
--- a/python/llm/example/CPU/PyTorch-Models/Model/yi/README.md
+++ b/python/llm/example/CPU/PyTorch-Models/Model/yi/README.md
@@ -8,7 +8,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for a Yi model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).
 
 After installing conda, create a Python environment for IPEX-LLM:
diff --git a/python/llm/example/CPU/PyTorch-Models/Model/yuan2/README.md b/python/llm/example/CPU/PyTorch-Models/Model/yuan2/README.md
index 403abc0548b..40737fce0e8 100644
--- a/python/llm/example/CPU/PyTorch-Models/Model/yuan2/README.md
+++ b/python/llm/example/CPU/PyTorch-Models/Model/yuan2/README.md
@@ -9,7 +9,7 @@ In addition, you need to modify some files in Yuan2-2B-hf folder, since Flash at
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for an Yuan2 model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).
 
 After installing conda, create a Python environment for IPEX-LLM:
diff --git a/python/llm/example/CPU/PyTorch-Models/Model/ziya/README.md b/python/llm/example/CPU/PyTorch-Models/Model/ziya/README.md
index ea43f9d3653..84544d5efb9 100644
--- a/python/llm/example/CPU/PyTorch-Models/Model/ziya/README.md
+++ b/python/llm/example/CPU/PyTorch-Models/Model/ziya/README.md
@@ -7,7 +7,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for a Ziya model to predict the next N tokens using `generate()` API, with IPEX-LLM 'optimize_model' API.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).
 
 After installing conda, create a Python environment for IPEX-LLM:
diff --git a/python/llm/example/CPU/Speculative-Decoding/EAGLE/README.md b/python/llm/example/CPU/Speculative-Decoding/EAGLE/README.md
index f51c9ac349b..da768bb99f1 100644
--- a/python/llm/example/CPU/Speculative-Decoding/EAGLE/README.md
+++ b/python/llm/example/CPU/Speculative-Decoding/EAGLE/README.md
@@ -8,7 +8,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 In this example, we run inference for a Llama2 model to showcase the speed of EAGLE with IPEX-LLM on MT-bench data on Intel CPUs.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).
 
 After installing conda, create a Python environment for IPEX-LLM:
 ```bash
diff --git a/python/llm/example/GPU/HF-Transformers-AutoModels/Advanced-Quantizations/GGUF/README.md b/python/llm/example/GPU/HF-Transformers-AutoModels/Advanced-Quantizations/GGUF/README.md
index a979d5f6051..372f0a1f1ad 100644
--- a/python/llm/example/GPU/HF-Transformers-AutoModels/Advanced-Quantizations/GGUF/README.md
+++ b/python/llm/example/GPU/HF-Transformers-AutoModels/Advanced-Quantizations/GGUF/README.md
@@ -19,7 +19,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 In the example [generate.py](./generate.py), we show a basic use case to load a GGUF LLaMA2 model into `ipex-llm` using `from_gguf()` API, with IPEX-LLM optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).
 
 After installing conda, create a Python environment for IPEX-LLM:
 ```bash
diff --git a/python/llm/example/GPU/HF-Transformers-AutoModels/Model/codegemma/README.md b/python/llm/example/GPU/HF-Transformers-AutoModels/Model/codegemma/README.md
index b0564824dda..96a0b804edb 100644
--- a/python/llm/example/GPU/HF-Transformers-AutoModels/Model/codegemma/README.md
+++ b/python/llm/example/GPU/HF-Transformers-AutoModels/Model/codegemma/README.md
@@ -10,7 +10,7 @@ To run these examples with IPEX-LLM on Intel GPUs, we have some recommended requ
 In the example [generate.py](./generate.py), we show a basic use case for a CodeGemma model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations on Intel GPUs.
 ### 1. Install
 #### 1.1 Installation on Linux
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).
 
 After installing conda, create a Python environment for IPEX-LLM:
 ```bash
diff --git a/python/llm/example/GPU/PyTorch-Models/Model/codegemma/README.md b/python/llm/example/GPU/PyTorch-Models/Model/codegemma/README.md
index df37bf837e8..1c145defdff 100644
--- a/python/llm/example/GPU/PyTorch-Models/Model/codegemma/README.md
+++ b/python/llm/example/GPU/PyTorch-Models/Model/codegemma/README.md
@@ -10,7 +10,7 @@ To run these examples with IPEX-LLM on Intel GPUs, we have some recommended requ
 In the example [generate.py](./generate.py), we show a basic use case for a CodeGemma model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations on Intel GPUs.
 ### 1. Install
 #### 1.1 Installation on Linux
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).
 
 After installing conda, create a Python environment for IPEX-LLM:
 ```bash
diff --git a/python/llm/scripts/README.md b/python/llm/scripts/README.md
index 20724652f2b..72cd9fe378c 100644
--- a/python/llm/scripts/README.md
+++ b/python/llm/scripts/README.md
@@ -17,7 +17,7 @@ sudo apt install xpu-smi
 
 ### Usage
 
-* After installing `ipex-llm`, open a terminal (on Linux) or **Anaconda Prompt** (on Windows), and activate the conda environment you have created for running `ipex-llm`:
+* After installing `ipex-llm`, open a terminal (on Linux) or **Miniforge Prompt** (on Windows), and activate the conda environment you have created for running `ipex-llm`:
   ```
   conda activate llm
   ```
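
---

For reference, the setup step that each updated link leads into ("After installing conda, create a Python environment for IPEX-LLM") is unchanged by this patch; only the conda download link and the Windows prompt name change. A minimal sketch of that flow with Miniforge, assuming the `llm` environment name used in the last hunk above, plus the Python 3.11 version and the CPU `ipex-llm[all]` package extra these example READMEs conventionally use (GPU examples install a different extra):

```bash
# Create and activate a dedicated conda environment for ipex-llm
# (the environment name matches the `conda activate llm` shown in the
# scripts/README.md hunk; the Python version is an assumption based on
# the repo's README convention -- adjust to your setup)
conda create -n llm python=3.11
conda activate llm

# CPU examples install ipex-llm with all optional dependencies;
# the GPU examples use the ipex-llm[xpu] extra instead
pip install --pre --upgrade ipex-llm[all]
```

On Windows, the same commands are run from the **Miniforge Prompt** that the final hunk now references in place of the Anaconda Prompt.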