From 0d07d34cbc55904c3edf7d426b382d391895eee3 Mon Sep 17 00:00:00 2001
From: Jin Qiao <89779290+JinBridger@users.noreply.github.com>
Date: Wed, 7 Feb 2024 16:58:29 +0800
Subject: [PATCH] LLM: add rwkv5 eagle GPU HF example (#10122)

* LLM: add rwkv5 eagle example

* fix

* fix link
---
 README.md                                     |   1 +
 python/llm/README.md                          |   1 +
 .../Model/rwkv5/README.md                     | 133 ++++++++++++++++++
 .../Model/rwkv5/generate.py                   |  85 +++++++++++
 4 files changed, 220 insertions(+)
 create mode 100644 python/llm/example/GPU/HF-Transformers-AutoModels/Model/rwkv5/README.md
 create mode 100644 python/llm/example/GPU/HF-Transformers-AutoModels/Model/rwkv5/generate.py

diff --git a/README.md b/README.md
index c63b85dbad2..cf553c0091e 100644
--- a/README.md
+++ b/README.md
@@ -182,6 +182,7 @@ Over 20 models have been optimized/verified on `bigdl-llm`, including *LLaMA/LLa
 | Phixtral | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/phixtral) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/phixtral) |
 | InternLM2 | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/internlm2) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/internlm2) |
 | RWKV4 |  | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/rwkv4) |
+| RWKV5 |  | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/rwkv5) |
 | Bark | [link](python/llm/example/CPU/PyTorch-Models/Model/bark) | [link](python/llm/example/GPU/PyTorch-Models/Model/bark) |
 | SpeechT5 |  | [link](python/llm/example/GPU/PyTorch-Models/Model/speech-t5) |
 
diff --git a/python/llm/README.md b/python/llm/README.md
index 8e87319144d..be38d2a0892 100644
--- a/python/llm/README.md
+++ b/python/llm/README.md
@@ -78,6 +78,7 @@ Over 20 models have been optimized/verified on `bigdl-llm`, including *LLaMA/LLa
 | Phixtral | [link](example/CPU/HF-Transformers-AutoModels/Model/phixtral) | [link](example/GPU/HF-Transformers-AutoModels/Model/phixtral) |
 | InternLM2 | [link](example/CPU/HF-Transformers-AutoModels/Model/internlm2) | [link](example/GPU/HF-Transformers-AutoModels/Model/internlm2) |
 | RWKV4 |  | [link](example/GPU/HF-Transformers-AutoModels/Model/rwkv4) |
+| RWKV5 |  | [link](example/GPU/HF-Transformers-AutoModels/Model/rwkv5) |
 | Bark | [link](example/CPU/PyTorch-Models/Model/bark) | [link](example/GPU/PyTorch-Models/Model/bark) |
 | SpeechT5 |  | [link](example/GPU/PyTorch-Models/Model/speech-t5) |
 
diff --git a/python/llm/example/GPU/HF-Transformers-AutoModels/Model/rwkv5/README.md b/python/llm/example/GPU/HF-Transformers-AutoModels/Model/rwkv5/README.md
new file mode 100644
index 00000000000..bd78ecc6f33
--- /dev/null
+++ b/python/llm/example/GPU/HF-Transformers-AutoModels/Model/rwkv5/README.md
@@ -0,0 +1,133 @@
+# RWKV5
+
+In this directory, you will find examples of how you can apply BigDL-LLM INT4 optimizations to RWKV5 models on [Intel GPUs](../../../README.md). For illustration purposes, we use [RWKV/HF_v5-Eagle-7B](https://huggingface.co/RWKV/HF_v5-Eagle-7B) as a reference RWKV5 model.
+
+## 0. Requirements
+To run these examples with BigDL-LLM on Intel GPUs, there are some recommended requirements for your machine; please refer to [here](../../../README.md#requirements) for more information.
+
+## Example 1: Predict Tokens using `generate()` API
+In the example [generate.py](./generate.py), we show a basic use case in which an RWKV5 model predicts the next N tokens using the `generate()` API, with BigDL-LLM INT4 optimizations on Intel GPUs.
+
+### 1. Install
+#### 1.1 Installation on Linux
+We suggest using conda to manage the environment:
+```bash
+conda create -n llm python=3.9
+conda activate llm
+# the command below will install intel_extension_for_pytorch==2.1.10+xpu by default
+pip install --pre --upgrade bigdl-llm[xpu] -f https://developer.intel.com/ipex-whl-stable-xpu
+```
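+
+After installation, you may run an optional sanity check (the same command also works for the Windows setup below) to confirm that PyTorch and the XPU extension import correctly; it should print `True` once the oneAPI environment from step 2 has been configured:
+```bash
+# verify that IPEX loads and an XPU device is visible
+python -c "import torch; import intel_extension_for_pytorch as ipex; print(torch.xpu.is_available())"
+```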
+#### 1.2 Installation on Windows
+We suggest using conda to manage the environment:
+```bash
+conda create -n llm python=3.9 libuv
+conda activate llm
+# the command below will install intel_extension_for_pytorch==2.1.10+xpu by default
+pip install --pre --upgrade bigdl-llm[xpu] -f https://developer.intel.com/ipex-whl-stable-xpu
+```
+
+### 2. Configure OneAPI environment variables
+#### 2.1 Configurations for Linux
+```bash
+source /opt/intel/oneapi/setvars.sh
+```
+#### 2.2 Configurations for Windows
+```cmd
+call "C:\Program Files (x86)\Intel\oneAPI\setvars.bat"
+```
+> Note: Please make sure you are using **CMD** (**Anaconda Prompt** if using conda) to run the command, as PowerShell is not supported.
+### 3. Runtime Configurations
+For optimal performance, it is recommended to set several environment variables. Please check out the suggestions below based on your device.
+#### 3.1 Configurations for Linux
+<details>
+
+<summary>For Intel Arc™ A-Series Graphics and Intel Data Center GPU Flex Series</summary>
+
+```bash
+export USE_XETLA=OFF
+export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
+```
+
+</details>
+
+<details>
+
+<summary>For Intel Data Center GPU Max Series</summary>
+
+```bash
+export LD_PRELOAD=${LD_PRELOAD}:${CONDA_PREFIX}/lib/libtcmalloc.so
+export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
+export ENABLE_SDP_FUSION=1
+```
+> Note: `libtcmalloc.so` can be installed with `conda install -c conda-forge -y gperftools=2.10`.
+</details>
+
+#### 3.2 Configurations for Windows
+<details>
+
+<summary>For Intel iGPU</summary>
+
+```cmd
+set SYCL_CACHE_PERSISTENT=1
+set BIGDL_LLM_XMX_DISABLED=1
+```
+
+</details>
+
+<details>
+
+<summary>For Intel Arc™ A300-Series or Pro A60</summary>
+
+```cmd
+set SYCL_CACHE_PERSISTENT=1
+```
+
+</details>
+
+<details>
+
+<summary>For other Intel dGPU Series</summary>
+
+There is no need to set further environment variables.
+
+</details>
+
+> Note: The first time each model runs on an Intel iGPU, Intel Arc™ A300-Series, or Pro A60 GPU, it may take several minutes to compile.
+### 4. Running examples
+```bash
+python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROMPT --n-predict N_PREDICT
+```
+
+Arguments info:
+- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the huggingface repo id for the RWKV5 model to be downloaded, or the path to the huggingface checkpoint folder. It defaults to `'RWKV/HF_v5-Eagle-7B'`.
+- `--prompt PROMPT`: argument defining the prompt to be inferred (wrapped in the integrated chat prompt format). It defaults to `'AI是什么?'`.
+- `--n-predict N_PREDICT`: argument defining the max number of tokens to predict. It defaults to `32`.
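+
+For example, a sample invocation with the default model might look like the following (the prompt value here is purely illustrative):
+```bash
+python ./generate.py --repo-id-or-model-path 'RWKV/HF_v5-Eagle-7B' --prompt 'What is AI?' --n-predict 32
+```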
+
+#### Sample Output
+#### [RWKV/HF_v5-Eagle-7B](https://huggingface.co/RWKV/HF_v5-Eagle-7B)
+```log
+Inference time: xxxx s
+-------------------- Prompt --------------------
+User: hi
+Assistant: Hi. I am your assistant and I will provide expert full response in full details. Please feel free to ask any question and I will always answer it.
+User: AI是什么?
+Assistant:
+-------------------- Output --------------------
+User: hi
+Assistant: Hi. I am your assistant and I will provide expert full response in full details. Please feel free to ask any question and I will always answer it.
+User: AI是什么?
+Assistant: AI是人工智能的缩写,是指通过机器学习、深度学习、神经网络等技术,
+```
+
+```log
+Inference time: xxxx s
+-------------------- Prompt --------------------
+User: hi
+Assistant: Hi. I am your assistant and I will provide expert full response in full details. Please feel free to ask any question and I will always answer it.
+User: What is AI?
+Assistant:
+-------------------- Output --------------------
+User: hi
+Assistant: Hi. I am your assistant and I will provide expert full response in full details. Please feel free to ask any question and I will always answer it.
+User: What is AI?
+Assistant: AI (Artificial Intelligence) is a branch of computer science that deals with developing intelligent machines that can think and act like humans. It involves developing algorithms and techniques
+```
\ No newline at end of file
diff --git a/python/llm/example/GPU/HF-Transformers-AutoModels/Model/rwkv5/generate.py b/python/llm/example/GPU/HF-Transformers-AutoModels/Model/rwkv5/generate.py
new file mode 100644
index 00000000000..7099ab1b170
--- /dev/null
+++ b/python/llm/example/GPU/HF-Transformers-AutoModels/Model/rwkv5/generate.py
@@ -0,0 +1,85 @@
+#
+# Copyright 2016 The BigDL Authors.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+import torch
+import time
+import argparse
+
+from bigdl.llm.transformers import AutoModelForCausalLM
+from transformers import AutoTokenizer
+
+# you could tune the prompt based on your own model,
+# here the prompt tuning is adapted from https://huggingface.co/RWKV/HF_v5-Eagle-7B
+def generate_prompt(instruction):
+    instruction = instruction.strip().replace('\r\n','\n').replace('\n\n','\n')
+    return f"""User: hi
+Assistant: Hi. I am your assistant and I will provide expert full response in full details. Please feel free to ask any question and I will always answer it.
+User: {instruction}
+Assistant:"""
+
+
+if __name__ == '__main__':
+    parser = argparse.ArgumentParser(description='Predict Tokens using `generate()` API for RWKV5 model')
+    parser.add_argument('--repo-id-or-model-path', type=str, default="RWKV/HF_v5-Eagle-7B",
+                        help='The huggingface repo id for the RWKV5 model to be downloaded'
+                             ', or the path to the huggingface checkpoint folder')
+    parser.add_argument('--prompt', type=str, default="AI是什么?",
+                        help='Prompt to infer')
+    parser.add_argument('--n-predict', type=int, default=32,
+                        help='Max tokens to predict')
+
+    args = parser.parse_args()
+    model_path = args.repo_id_or_model_path
+
+    # Load the model in 4-bit precision,
+    # which converts the relevant layers in the model into INT4 format
+    #
+    # Please note that for RWKV5 models, `optimize_model` is required to be set to `True`
+    #
+    # When running LLMs on an Intel iGPU on Windows, we recommend setting `cpu_embedding=True` in the `from_pretrained` function.
+    # This allows the memory-intensive embedding layer to run on the CPU instead of the iGPU.
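+    # For example (illustrative only, for Windows users on Intel iGPUs):
+    #   model = AutoModelForCausalLM.from_pretrained(model_path, load_in_4bit=True, optimize_model=True,
+    #                                                trust_remote_code=True, use_cache=True, cpu_embedding=True)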
+    model = AutoModelForCausalLM.from_pretrained(model_path,
+                                                 load_in_4bit=True,
+                                                 optimize_model=True,
+                                                 trust_remote_code=True,
+                                                 use_cache=True)
+    model = model.to('xpu')
+
+    # Load tokenizer
+    tokenizer = AutoTokenizer.from_pretrained(model_path,
+                                              trust_remote_code=True)
+
+    # Generate predicted tokens
+    with torch.inference_mode():
+        prompt = generate_prompt(instruction=args.prompt)
+        input_ids = tokenizer.encode(prompt, return_tensors="pt").to('xpu')
+        # the IPEX model needs a warmup run before the inference time can be measured accurately
+        output = model.generate(input_ids,
+                                max_new_tokens=args.n_predict)
+
+        # start inference
+        st = time.time()
+        output = model.generate(input_ids,
+                                max_new_tokens=args.n_predict)
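+        # wait for all queued XPU operations to finish so the timing below is accurate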
+        torch.xpu.synchronize()
+        end = time.time()
+        output_str = tokenizer.decode(output[0], skip_special_tokens=True)
+        print(f'Inference time: {end-st} s')
+        print('-'*20, 'Prompt', '-'*20)
+        print(prompt)
+        print('-'*20, 'Output', '-'*20)
+        print(output_str)